00:00:00.000 Started by upstream project "autotest-per-patch" build number 121034 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.099 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.137 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.196 Using shallow fetch with depth 1 00:00:00.196 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.196 > git --version # timeout=10 00:00:00.234 > git --version # 'git version 2.39.2' 00:00:00.234 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.235 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.235 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.592 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.603 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.613 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:05.613 > git config core.sparsecheckout # timeout=10 00:00:05.622 > git read-tree -mu HEAD # timeout=10 00:00:05.637 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:05.653 Commit message: "pool: attach build logs for failed merge builds" 00:00:05.654 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:05.777 [Pipeline] Start of Pipeline 00:00:05.792 [Pipeline] library 00:00:05.793 Loading library shm_lib@master 00:00:05.793 Library shm_lib@master is cached. Copying from home. 00:00:05.812 [Pipeline] node 00:00:20.814 Still waiting to schedule task 00:00:20.814 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:28.509 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:28.511 [Pipeline] { 00:07:28.522 [Pipeline] catchError 00:07:28.524 [Pipeline] { 00:07:28.539 [Pipeline] wrap 00:07:28.549 [Pipeline] { 00:07:28.558 [Pipeline] stage 00:07:28.560 [Pipeline] { (Prologue) 00:07:28.584 [Pipeline] echo 00:07:28.585 Node: VM-host-WFP7 00:07:28.591 [Pipeline] cleanWs 00:07:28.600 [WS-CLEANUP] Deleting project workspace... 00:07:28.600 [WS-CLEANUP] Deferred wipeout is used... 
00:07:28.607 [WS-CLEANUP] done 00:07:28.785 [Pipeline] setCustomBuildProperty 00:07:28.846 [Pipeline] nodesByLabel 00:07:28.848 Found a total of 1 nodes with the 'sorcerer' label 00:07:28.856 [Pipeline] httpRequest 00:07:28.860 HttpMethod: GET 00:07:28.860 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:07:28.863 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:07:28.864 Response Code: HTTP/1.1 200 OK 00:07:28.865 Success: Status code 200 is in the accepted range: 200,404 00:07:28.865 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:07:29.136 [Pipeline] sh 00:07:29.421 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:07:29.441 [Pipeline] httpRequest 00:07:29.446 HttpMethod: GET 00:07:29.447 URL: http://10.211.164.96/packages/spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz 00:07:29.448 Sending request to url: http://10.211.164.96/packages/spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz 00:07:29.448 Response Code: HTTP/1.1 200 OK 00:07:29.449 Success: Status code 200 is in the accepted range: 200,404 00:07:29.449 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz 00:07:33.194 [Pipeline] sh 00:07:33.476 + tar --no-same-owner -xf spdk_4907d15656c12273dfe0c9bfdb03f10b212689b8.tar.gz 00:07:36.780 [Pipeline] sh 00:07:37.071 + git -C spdk log --oneline -n5 00:07:37.071 4907d1565 lib/nvmf: deprecate [listen_]address.transport 00:07:37.071 ea150257d nvmf/rpc: fix input validation for nvmf_subsystem_add_listener 00:07:37.071 dd57ed3e8 sma: add listener check on vfio device creation 00:07:37.071 d36d2b7e8 doc: mark adrfam as optional 00:07:37.071 129e6ba3b test/nvmf: add missing remove listener discovery 00:07:37.091 [Pipeline] writeFile 00:07:37.107 [Pipeline] sh 00:07:37.393 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:37.405 [Pipeline] sh 00:07:37.686 + cat autorun-spdk.conf 00:07:37.686 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:37.686 SPDK_TEST_NVMF=1 00:07:37.686 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:37.686 SPDK_TEST_URING=1 00:07:37.686 SPDK_TEST_USDT=1 00:07:37.686 SPDK_RUN_UBSAN=1 00:07:37.686 NET_TYPE=virt 00:07:37.686 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:37.693 RUN_NIGHTLY=0 00:07:37.694 [Pipeline] } 00:07:37.708 [Pipeline] // stage 00:07:37.721 [Pipeline] stage 00:07:37.723 [Pipeline] { (Run VM) 00:07:37.736 [Pipeline] sh 00:07:38.017 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:38.017 + echo 'Start stage prepare_nvme.sh' 00:07:38.017 Start stage prepare_nvme.sh 00:07:38.017 + [[ -n 7 ]] 00:07:38.017 + disk_prefix=ex7 00:07:38.017 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:07:38.017 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:07:38.017 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:07:38.017 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:38.017 ++ SPDK_TEST_NVMF=1 00:07:38.017 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:38.017 ++ SPDK_TEST_URING=1 00:07:38.017 ++ SPDK_TEST_USDT=1 00:07:38.017 ++ SPDK_RUN_UBSAN=1 00:07:38.017 ++ NET_TYPE=virt 00:07:38.017 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:38.017 ++ RUN_NIGHTLY=0 00:07:38.017 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:38.017 + nvme_files=() 00:07:38.017 + declare -A nvme_files 00:07:38.017 + 
backend_dir=/var/lib/libvirt/images/backends 00:07:38.017 + nvme_files['nvme.img']=5G 00:07:38.017 + nvme_files['nvme-cmb.img']=5G 00:07:38.017 + nvme_files['nvme-multi0.img']=4G 00:07:38.017 + nvme_files['nvme-multi1.img']=4G 00:07:38.017 + nvme_files['nvme-multi2.img']=4G 00:07:38.017 + nvme_files['nvme-openstack.img']=8G 00:07:38.017 + nvme_files['nvme-zns.img']=5G 00:07:38.017 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:38.017 + (( SPDK_TEST_FTL == 1 )) 00:07:38.017 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:38.017 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:07:38.017 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:07:38.017 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:07:38.017 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:07:38.017 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:07:38.017 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:07:38.017 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:38.017 + for nvme in "${!nvme_files[@]}" 00:07:38.017 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:07:38.275 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:38.275 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:07:38.275 + echo 'End stage prepare_nvme.sh' 00:07:38.275 End stage prepare_nvme.sh 00:07:38.354 [Pipeline] sh 00:07:38.663 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:38.663 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:07:38.663 00:07:38.663 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:07:38.663 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:07:38.663 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 
00:07:38.663 HELP=0 00:07:38.663 DRY_RUN=0 00:07:38.663 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:07:38.663 NVME_DISKS_TYPE=nvme,nvme, 00:07:38.663 NVME_AUTO_CREATE=0 00:07:38.663 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:07:38.663 NVME_CMB=,, 00:07:38.663 NVME_PMR=,, 00:07:38.663 NVME_ZNS=,, 00:07:38.663 NVME_MS=,, 00:07:38.663 NVME_FDP=,, 00:07:38.663 SPDK_VAGRANT_DISTRO=fedora38 00:07:38.663 SPDK_VAGRANT_VMCPU=10 00:07:38.663 SPDK_VAGRANT_VMRAM=12288 00:07:38.663 SPDK_VAGRANT_PROVIDER=libvirt 00:07:38.663 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:38.663 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:38.663 SPDK_OPENSTACK_NETWORK=0 00:07:38.663 VAGRANT_PACKAGE_BOX=0 00:07:38.663 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:07:38.663 FORCE_DISTRO=true 00:07:38.663 VAGRANT_BOX_VERSION= 00:07:38.663 EXTRA_VAGRANTFILES= 00:07:38.663 NIC_MODEL=virtio 00:07:38.663 00:07:38.663 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:07:38.663 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:07:41.198 Bringing machine 'default' up with 'libvirt' provider... 00:07:42.133 ==> default: Creating image (snapshot of base box volume). 00:07:42.133 ==> default: Creating domain with the following settings... 00:07:42.133 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713988763_cc23e4a92e79986eb843 00:07:42.133 ==> default: -- Domain type: kvm 00:07:42.133 ==> default: -- Cpus: 10 00:07:42.133 ==> default: -- Feature: acpi 00:07:42.133 ==> default: -- Feature: apic 00:07:42.133 ==> default: -- Feature: pae 00:07:42.133 ==> default: -- Memory: 12288M 00:07:42.133 ==> default: -- Memory Backing: hugepages: 00:07:42.133 ==> default: -- Management MAC: 00:07:42.133 ==> default: -- Loader: 00:07:42.133 ==> default: -- Nvram: 00:07:42.133 ==> default: -- Base box: spdk/fedora38 00:07:42.133 ==> default: -- Storage pool: default 00:07:42.133 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713988763_cc23e4a92e79986eb843.img (20G) 00:07:42.133 ==> default: -- Volume Cache: default 00:07:42.133 ==> default: -- Kernel: 00:07:42.133 ==> default: -- Initrd: 00:07:42.133 ==> default: -- Graphics Type: vnc 00:07:42.133 ==> default: -- Graphics Port: -1 00:07:42.133 ==> default: -- Graphics IP: 127.0.0.1 00:07:42.133 ==> default: -- Graphics Password: Not defined 00:07:42.133 ==> default: -- Video Type: cirrus 00:07:42.133 ==> default: -- Video VRAM: 9216 00:07:42.133 ==> default: -- Sound Type: 00:07:42.133 ==> default: -- Keymap: en-us 00:07:42.133 ==> default: -- TPM Path: 00:07:42.133 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:42.133 ==> default: -- Command line args: 00:07:42.133 ==> default: -> value=-device, 00:07:42.133 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:42.133 ==> default: -> value=-drive, 00:07:42.133 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:07:42.133 ==> default: -> value=-device, 00:07:42.133 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:42.133 ==> default: -> value=-device, 00:07:42.133 
==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:42.133 ==> default: -> value=-drive, 00:07:42.133 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:42.133 ==> default: -> value=-device, 00:07:42.133 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:42.133 ==> default: -> value=-drive, 00:07:42.133 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:42.133 ==> default: -> value=-device, 00:07:42.133 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:42.133 ==> default: -> value=-drive, 00:07:42.133 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:42.133 ==> default: -> value=-device, 00:07:42.133 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:42.392 ==> default: Creating shared folders metadata... 00:07:42.392 ==> default: Starting domain. 00:07:43.771 ==> default: Waiting for domain to get an IP address... 00:08:01.929 ==> default: Waiting for SSH to become available... 00:08:02.867 ==> default: Configuring and enabling network interfaces... 00:08:09.444 default: SSH address: 192.168.121.234:22 00:08:09.444 default: SSH username: vagrant 00:08:09.444 default: SSH auth method: private key 00:08:11.352 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:19.473 ==> default: Mounting SSHFS shared folder... 00:08:20.858 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:08:20.858 ==> default: Checking Mount.. 00:08:22.232 ==> default: Folder Successfully Mounted! 00:08:22.232 ==> default: Running provisioner: file... 00:08:23.184 default: ~/.gitconfig => .gitconfig 00:08:23.753 00:08:23.753 SUCCESS! 00:08:23.753 00:08:23.753 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:08:23.753 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:23.753 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:08:23.753 00:08:23.762 [Pipeline] } 00:08:23.779 [Pipeline] // stage 00:08:23.787 [Pipeline] dir 00:08:23.787 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:08:23.788 [Pipeline] { 00:08:23.800 [Pipeline] catchError 00:08:23.801 [Pipeline] { 00:08:23.815 [Pipeline] sh 00:08:24.096 + vagrant ssh-config --host vagrant 00:08:24.096 + sed -ne /^Host/,$p 00:08:24.096 + tee ssh_conf 00:08:26.650 Host vagrant 00:08:26.650 HostName 192.168.121.234 00:08:26.650 User vagrant 00:08:26.650 Port 22 00:08:26.650 UserKnownHostsFile /dev/null 00:08:26.650 StrictHostKeyChecking no 00:08:26.650 PasswordAuthentication no 00:08:26.650 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:08:26.650 IdentitiesOnly yes 00:08:26.650 LogLevel FATAL 00:08:26.650 ForwardAgent yes 00:08:26.650 ForwardX11 yes 00:08:26.650 00:08:26.670 [Pipeline] withEnv 00:08:26.672 [Pipeline] { 00:08:26.713 [Pipeline] sh 00:08:27.049 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:27.049 source /etc/os-release 00:08:27.049 [[ -e /image.version ]] && img=$(< /image.version) 00:08:27.049 # Minimal, systemd-like check. 00:08:27.049 if [[ -e /.dockerenv ]]; then 00:08:27.049 # Clear garbage from the node's name: 00:08:27.049 # agt-er_autotest_547-896 -> autotest_547-896 00:08:27.049 # $HOSTNAME is the actual container id 00:08:27.049 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:27.049 if mountpoint -q /etc/hostname; then 00:08:27.049 # We can assume this is a mount from a host where container is running, 00:08:27.049 # so fetch its hostname to easily identify the target swarm worker. 00:08:27.049 container="$(< /etc/hostname) ($agent)" 00:08:27.049 else 00:08:27.049 # Fallback 00:08:27.049 container=$agent 00:08:27.049 fi 00:08:27.049 fi 00:08:27.049 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:27.049 00:08:27.337 [Pipeline] } 00:08:27.364 [Pipeline] // withEnv 00:08:27.377 [Pipeline] setCustomBuildProperty 00:08:27.388 [Pipeline] stage 00:08:27.390 [Pipeline] { (Tests) 00:08:27.411 [Pipeline] sh 00:08:27.705 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:27.998 [Pipeline] timeout 00:08:27.998 Timeout set to expire in 30 min 00:08:28.000 [Pipeline] { 00:08:28.031 [Pipeline] sh 00:08:28.328 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:28.909 HEAD is now at 4907d1565 lib/nvmf: deprecate [listen_]address.transport 00:08:28.932 [Pipeline] sh 00:08:29.233 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:29.518 [Pipeline] sh 00:08:29.813 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:30.101 [Pipeline] sh 00:08:30.399 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:08:30.733 ++ readlink -f spdk_repo 00:08:30.733 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:30.733 + [[ -n /home/vagrant/spdk_repo ]] 00:08:30.733 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:30.733 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:30.733 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:30.733 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:30.733 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:30.733 + cd /home/vagrant/spdk_repo 00:08:30.733 + source /etc/os-release 00:08:30.733 ++ NAME='Fedora Linux' 00:08:30.733 ++ VERSION='38 (Cloud Edition)' 00:08:30.733 ++ ID=fedora 00:08:30.733 ++ VERSION_ID=38 00:08:30.733 ++ VERSION_CODENAME= 00:08:30.733 ++ PLATFORM_ID=platform:f38 00:08:30.733 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:30.734 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:30.734 ++ LOGO=fedora-logo-icon 00:08:30.734 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:30.734 ++ HOME_URL=https://fedoraproject.org/ 00:08:30.734 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:30.734 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:30.734 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:30.734 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:30.734 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:30.734 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:30.734 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:30.734 ++ SUPPORT_END=2024-05-14 00:08:30.734 ++ VARIANT='Cloud Edition' 00:08:30.734 ++ VARIANT_ID=cloud 00:08:30.734 + uname -a 00:08:30.734 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:30.734 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:31.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:31.327 Hugepages 00:08:31.327 node hugesize free / total 00:08:31.327 node0 1048576kB 0 / 0 00:08:31.327 node0 2048kB 0 / 0 00:08:31.327 00:08:31.327 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:31.327 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:31.327 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:31.328 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:31.328 + rm -f /tmp/spdk-ld-path 00:08:31.328 + source autorun-spdk.conf 00:08:31.328 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:31.328 ++ SPDK_TEST_NVMF=1 00:08:31.328 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:31.328 ++ SPDK_TEST_URING=1 00:08:31.328 ++ SPDK_TEST_USDT=1 00:08:31.328 ++ SPDK_RUN_UBSAN=1 00:08:31.328 ++ NET_TYPE=virt 00:08:31.328 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:31.328 ++ RUN_NIGHTLY=0 00:08:31.328 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:31.328 + [[ -n '' ]] 00:08:31.328 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:31.328 + for M in /var/spdk/build-*-manifest.txt 00:08:31.328 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:31.328 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:31.328 + for M in /var/spdk/build-*-manifest.txt 00:08:31.328 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:31.328 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:31.328 ++ uname 00:08:31.328 + [[ Linux == \L\i\n\u\x ]] 00:08:31.328 + sudo dmesg -T 00:08:31.591 + sudo dmesg --clear 00:08:31.591 + dmesg_pid=5314 00:08:31.591 + [[ Fedora Linux == FreeBSD ]] 00:08:31.591 + sudo dmesg -Tw 00:08:31.591 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:31.591 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:31.591 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:31.591 + [[ -x /usr/src/fio-static/fio ]] 00:08:31.591 + export FIO_BIN=/usr/src/fio-static/fio 00:08:31.591 + FIO_BIN=/usr/src/fio-static/fio 00:08:31.591 + [[ '' == 
\/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:31.591 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:31.591 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:31.591 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:31.591 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:31.591 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:31.591 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:31.591 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:31.591 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:31.591 Test configuration: 00:08:31.591 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:31.591 SPDK_TEST_NVMF=1 00:08:31.591 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:31.591 SPDK_TEST_URING=1 00:08:31.591 SPDK_TEST_USDT=1 00:08:31.591 SPDK_RUN_UBSAN=1 00:08:31.591 NET_TYPE=virt 00:08:31.591 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:31.591 RUN_NIGHTLY=0 20:00:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.591 20:00:13 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:31.591 20:00:13 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.591 20:00:13 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.591 20:00:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.591 20:00:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.591 20:00:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.591 20:00:13 -- paths/export.sh@5 -- $ export PATH 00:08:31.591 20:00:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.591 20:00:13 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:31.591 20:00:13 -- common/autobuild_common.sh@435 -- $ date +%s 00:08:31.591 20:00:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713988813.XXXXXX 00:08:31.591 20:00:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713988813.LAdA3s 00:08:31.591 20:00:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:08:31.591 20:00:13 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:08:31.591 20:00:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:31.591 20:00:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:31.591 20:00:13 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:31.591 20:00:13 -- common/autobuild_common.sh@451 -- $ get_config_params 00:08:31.591 20:00:13 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:08:31.591 20:00:13 -- common/autotest_common.sh@10 -- $ set +x 00:08:31.591 20:00:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:08:31.591 20:00:13 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:08:31.591 20:00:13 -- pm/common@17 -- $ local monitor 00:08:31.591 20:00:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:31.591 20:00:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5348 00:08:31.591 20:00:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:31.591 20:00:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5350 00:08:31.591 20:00:13 -- pm/common@21 -- $ date +%s 00:08:31.591 20:00:13 -- pm/common@26 -- $ sleep 1 00:08:31.591 20:00:13 -- pm/common@21 -- $ date +%s 00:08:31.591 20:00:13 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713988813 00:08:31.591 20:00:13 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713988813 00:08:31.852 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713988813_collect-vmstat.pm.log 00:08:31.852 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713988813_collect-cpu-load.pm.log 00:08:32.788 20:00:14 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:08:32.788 20:00:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:32.788 20:00:14 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:32.788 20:00:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:32.788 20:00:14 -- spdk/autobuild.sh@16 -- $ date -u 00:08:32.788 Wed Apr 24 08:00:14 PM UTC 2024 00:08:32.788 20:00:14 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:32.788 v24.05-pre-415-g4907d1565 00:08:32.788 20:00:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:08:32.788 20:00:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:32.788 20:00:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:32.788 20:00:14 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:08:32.788 20:00:14 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:08:32.788 20:00:14 -- common/autotest_common.sh@10 -- $ set +x 00:08:32.788 ************************************ 00:08:32.788 START TEST ubsan 00:08:32.788 ************************************ 00:08:32.788 using ubsan 00:08:32.788 20:00:14 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 
00:08:32.788 00:08:32.788 real 0m0.001s 00:08:32.788 user 0m0.001s 00:08:32.788 sys 0m0.000s 00:08:32.788 20:00:14 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:08:32.788 20:00:14 -- common/autotest_common.sh@10 -- $ set +x 00:08:32.788 ************************************ 00:08:32.788 END TEST ubsan 00:08:32.788 ************************************ 00:08:32.788 20:00:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:32.788 20:00:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:32.788 20:00:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:32.788 20:00:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:32.788 20:00:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:32.788 20:00:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:32.788 20:00:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:32.788 20:00:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:32.788 20:00:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:08:33.047 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:33.047 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:33.616 Using 'verbs' RDMA provider 00:08:49.519 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:04.405 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:04.405 Creating mk/config.mk...done. 00:09:04.405 Creating mk/cc.flags.mk...done. 00:09:04.405 Type 'make' to build. 00:09:04.405 20:00:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:09:04.405 20:00:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:09:04.405 20:00:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:09:04.405 20:00:46 -- common/autotest_common.sh@10 -- $ set +x 00:09:04.405 ************************************ 00:09:04.405 START TEST make 00:09:04.405 ************************************ 00:09:04.405 20:00:46 -- common/autotest_common.sh@1111 -- $ make -j10 00:09:04.971 make[1]: Nothing to be done for 'all'. 
00:09:17.228 The Meson build system 00:09:17.228 Version: 1.3.1 00:09:17.228 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:09:17.228 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:09:17.228 Build type: native build 00:09:17.228 Program cat found: YES (/usr/bin/cat) 00:09:17.228 Project name: DPDK 00:09:17.228 Project version: 23.11.0 00:09:17.228 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:17.228 C linker for the host machine: cc ld.bfd 2.39-16 00:09:17.228 Host machine cpu family: x86_64 00:09:17.228 Host machine cpu: x86_64 00:09:17.228 Message: ## Building in Developer Mode ## 00:09:17.228 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:17.228 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:09:17.228 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:09:17.228 Program python3 found: YES (/usr/bin/python3) 00:09:17.228 Program cat found: YES (/usr/bin/cat) 00:09:17.228 Compiler for C supports arguments -march=native: YES 00:09:17.228 Checking for size of "void *" : 8 00:09:17.228 Checking for size of "void *" : 8 (cached) 00:09:17.228 Library m found: YES 00:09:17.228 Library numa found: YES 00:09:17.228 Has header "numaif.h" : YES 00:09:17.228 Library fdt found: NO 00:09:17.228 Library execinfo found: NO 00:09:17.228 Has header "execinfo.h" : YES 00:09:17.228 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:17.228 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:17.228 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:17.228 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:17.228 Run-time dependency openssl found: YES 3.0.9 00:09:17.228 Run-time dependency libpcap found: YES 1.10.4 00:09:17.228 Has header "pcap.h" with dependency libpcap: YES 00:09:17.228 Compiler for C supports arguments -Wcast-qual: YES 00:09:17.228 Compiler for C supports arguments -Wdeprecated: YES 00:09:17.228 Compiler for C supports arguments -Wformat: YES 00:09:17.228 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:17.228 Compiler for C supports arguments -Wformat-security: NO 00:09:17.228 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:17.228 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:17.228 Compiler for C supports arguments -Wnested-externs: YES 00:09:17.228 Compiler for C supports arguments -Wold-style-definition: YES 00:09:17.228 Compiler for C supports arguments -Wpointer-arith: YES 00:09:17.228 Compiler for C supports arguments -Wsign-compare: YES 00:09:17.228 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:17.228 Compiler for C supports arguments -Wundef: YES 00:09:17.228 Compiler for C supports arguments -Wwrite-strings: YES 00:09:17.228 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:17.228 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:17.228 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:17.228 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:17.228 Program objdump found: YES (/usr/bin/objdump) 00:09:17.228 Compiler for C supports arguments -mavx512f: YES 00:09:17.228 Checking if "AVX512 checking" compiles: YES 00:09:17.228 Fetching value of define "__SSE4_2__" : 1 00:09:17.228 Fetching value of define "__AES__" : 1 00:09:17.228 Fetching value of define "__AVX__" : 1 00:09:17.228 
Fetching value of define "__AVX2__" : 1 00:09:17.228 Fetching value of define "__AVX512BW__" : 1 00:09:17.228 Fetching value of define "__AVX512CD__" : 1 00:09:17.228 Fetching value of define "__AVX512DQ__" : 1 00:09:17.228 Fetching value of define "__AVX512F__" : 1 00:09:17.228 Fetching value of define "__AVX512VL__" : 1 00:09:17.228 Fetching value of define "__PCLMUL__" : 1 00:09:17.228 Fetching value of define "__RDRND__" : 1 00:09:17.228 Fetching value of define "__RDSEED__" : 1 00:09:17.228 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:17.228 Fetching value of define "__znver1__" : (undefined) 00:09:17.228 Fetching value of define "__znver2__" : (undefined) 00:09:17.228 Fetching value of define "__znver3__" : (undefined) 00:09:17.228 Fetching value of define "__znver4__" : (undefined) 00:09:17.228 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:17.228 Message: lib/log: Defining dependency "log" 00:09:17.228 Message: lib/kvargs: Defining dependency "kvargs" 00:09:17.228 Message: lib/telemetry: Defining dependency "telemetry" 00:09:17.228 Checking for function "getentropy" : NO 00:09:17.228 Message: lib/eal: Defining dependency "eal" 00:09:17.228 Message: lib/ring: Defining dependency "ring" 00:09:17.228 Message: lib/rcu: Defining dependency "rcu" 00:09:17.228 Message: lib/mempool: Defining dependency "mempool" 00:09:17.228 Message: lib/mbuf: Defining dependency "mbuf" 00:09:17.228 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:17.228 Fetching value of define "__AVX512F__" : 1 (cached) 00:09:17.228 Fetching value of define "__AVX512BW__" : 1 (cached) 00:09:17.228 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:09:17.228 Fetching value of define "__AVX512VL__" : 1 (cached) 00:09:17.228 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:09:17.228 Compiler for C supports arguments -mpclmul: YES 00:09:17.228 Compiler for C supports arguments -maes: YES 00:09:17.228 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:17.228 Compiler for C supports arguments -mavx512bw: YES 00:09:17.228 Compiler for C supports arguments -mavx512dq: YES 00:09:17.228 Compiler for C supports arguments -mavx512vl: YES 00:09:17.228 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:17.228 Compiler for C supports arguments -mavx2: YES 00:09:17.228 Compiler for C supports arguments -mavx: YES 00:09:17.228 Message: lib/net: Defining dependency "net" 00:09:17.228 Message: lib/meter: Defining dependency "meter" 00:09:17.228 Message: lib/ethdev: Defining dependency "ethdev" 00:09:17.228 Message: lib/pci: Defining dependency "pci" 00:09:17.228 Message: lib/cmdline: Defining dependency "cmdline" 00:09:17.228 Message: lib/hash: Defining dependency "hash" 00:09:17.228 Message: lib/timer: Defining dependency "timer" 00:09:17.228 Message: lib/compressdev: Defining dependency "compressdev" 00:09:17.228 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:17.228 Message: lib/dmadev: Defining dependency "dmadev" 00:09:17.228 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:17.228 Message: lib/power: Defining dependency "power" 00:09:17.228 Message: lib/reorder: Defining dependency "reorder" 00:09:17.228 Message: lib/security: Defining dependency "security" 00:09:17.228 Has header "linux/userfaultfd.h" : YES 00:09:17.228 Has header "linux/vduse.h" : YES 00:09:17.228 Message: lib/vhost: Defining dependency "vhost" 00:09:17.228 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:17.228 Message: 
drivers/bus/pci: Defining dependency "bus_pci" 00:09:17.228 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:17.228 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:17.228 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:17.228 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:17.228 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:17.228 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:17.228 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:17.228 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:17.228 Program doxygen found: YES (/usr/bin/doxygen) 00:09:17.228 Configuring doxy-api-html.conf using configuration 00:09:17.228 Configuring doxy-api-man.conf using configuration 00:09:17.228 Program mandb found: YES (/usr/bin/mandb) 00:09:17.228 Program sphinx-build found: NO 00:09:17.228 Configuring rte_build_config.h using configuration 00:09:17.228 Message: 00:09:17.228 ================= 00:09:17.228 Applications Enabled 00:09:17.228 ================= 00:09:17.228 00:09:17.228 apps: 00:09:17.228 00:09:17.228 00:09:17.228 Message: 00:09:17.228 ================= 00:09:17.228 Libraries Enabled 00:09:17.228 ================= 00:09:17.228 00:09:17.229 libs: 00:09:17.229 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:17.229 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:09:17.229 cryptodev, dmadev, power, reorder, security, vhost, 00:09:17.229 00:09:17.229 Message: 00:09:17.229 =============== 00:09:17.229 Drivers Enabled 00:09:17.229 =============== 00:09:17.229 00:09:17.229 common: 00:09:17.229 00:09:17.229 bus: 00:09:17.229 pci, vdev, 00:09:17.229 mempool: 00:09:17.229 ring, 00:09:17.229 dma: 00:09:17.229 00:09:17.229 net: 00:09:17.229 00:09:17.229 crypto: 00:09:17.229 00:09:17.229 compress: 00:09:17.229 00:09:17.229 vdpa: 00:09:17.229 00:09:17.229 00:09:17.229 Message: 00:09:17.229 ================= 00:09:17.229 Content Skipped 00:09:17.229 ================= 00:09:17.229 00:09:17.229 apps: 00:09:17.229 dumpcap: explicitly disabled via build config 00:09:17.229 graph: explicitly disabled via build config 00:09:17.229 pdump: explicitly disabled via build config 00:09:17.229 proc-info: explicitly disabled via build config 00:09:17.229 test-acl: explicitly disabled via build config 00:09:17.229 test-bbdev: explicitly disabled via build config 00:09:17.229 test-cmdline: explicitly disabled via build config 00:09:17.229 test-compress-perf: explicitly disabled via build config 00:09:17.229 test-crypto-perf: explicitly disabled via build config 00:09:17.229 test-dma-perf: explicitly disabled via build config 00:09:17.229 test-eventdev: explicitly disabled via build config 00:09:17.229 test-fib: explicitly disabled via build config 00:09:17.229 test-flow-perf: explicitly disabled via build config 00:09:17.229 test-gpudev: explicitly disabled via build config 00:09:17.229 test-mldev: explicitly disabled via build config 00:09:17.229 test-pipeline: explicitly disabled via build config 00:09:17.229 test-pmd: explicitly disabled via build config 00:09:17.229 test-regex: explicitly disabled via build config 00:09:17.229 test-sad: explicitly disabled via build config 00:09:17.229 test-security-perf: explicitly disabled via build config 00:09:17.229 00:09:17.229 libs: 00:09:17.229 metrics: explicitly disabled via build config 00:09:17.229 acl: explicitly disabled via 
build config 00:09:17.229 bbdev: explicitly disabled via build config 00:09:17.229 bitratestats: explicitly disabled via build config 00:09:17.229 bpf: explicitly disabled via build config 00:09:17.229 cfgfile: explicitly disabled via build config 00:09:17.229 distributor: explicitly disabled via build config 00:09:17.229 efd: explicitly disabled via build config 00:09:17.229 eventdev: explicitly disabled via build config 00:09:17.229 dispatcher: explicitly disabled via build config 00:09:17.229 gpudev: explicitly disabled via build config 00:09:17.229 gro: explicitly disabled via build config 00:09:17.229 gso: explicitly disabled via build config 00:09:17.229 ip_frag: explicitly disabled via build config 00:09:17.229 jobstats: explicitly disabled via build config 00:09:17.229 latencystats: explicitly disabled via build config 00:09:17.229 lpm: explicitly disabled via build config 00:09:17.229 member: explicitly disabled via build config 00:09:17.229 pcapng: explicitly disabled via build config 00:09:17.229 rawdev: explicitly disabled via build config 00:09:17.229 regexdev: explicitly disabled via build config 00:09:17.229 mldev: explicitly disabled via build config 00:09:17.229 rib: explicitly disabled via build config 00:09:17.229 sched: explicitly disabled via build config 00:09:17.229 stack: explicitly disabled via build config 00:09:17.229 ipsec: explicitly disabled via build config 00:09:17.229 pdcp: explicitly disabled via build config 00:09:17.229 fib: explicitly disabled via build config 00:09:17.229 port: explicitly disabled via build config 00:09:17.229 pdump: explicitly disabled via build config 00:09:17.229 table: explicitly disabled via build config 00:09:17.229 pipeline: explicitly disabled via build config 00:09:17.229 graph: explicitly disabled via build config 00:09:17.229 node: explicitly disabled via build config 00:09:17.229 00:09:17.229 drivers: 00:09:17.229 common/cpt: not in enabled drivers build config 00:09:17.229 common/dpaax: not in enabled drivers build config 00:09:17.229 common/iavf: not in enabled drivers build config 00:09:17.229 common/idpf: not in enabled drivers build config 00:09:17.229 common/mvep: not in enabled drivers build config 00:09:17.229 common/octeontx: not in enabled drivers build config 00:09:17.229 bus/auxiliary: not in enabled drivers build config 00:09:17.229 bus/cdx: not in enabled drivers build config 00:09:17.229 bus/dpaa: not in enabled drivers build config 00:09:17.229 bus/fslmc: not in enabled drivers build config 00:09:17.229 bus/ifpga: not in enabled drivers build config 00:09:17.229 bus/platform: not in enabled drivers build config 00:09:17.229 bus/vmbus: not in enabled drivers build config 00:09:17.229 common/cnxk: not in enabled drivers build config 00:09:17.229 common/mlx5: not in enabled drivers build config 00:09:17.229 common/nfp: not in enabled drivers build config 00:09:17.229 common/qat: not in enabled drivers build config 00:09:17.229 common/sfc_efx: not in enabled drivers build config 00:09:17.229 mempool/bucket: not in enabled drivers build config 00:09:17.229 mempool/cnxk: not in enabled drivers build config 00:09:17.229 mempool/dpaa: not in enabled drivers build config 00:09:17.229 mempool/dpaa2: not in enabled drivers build config 00:09:17.229 mempool/octeontx: not in enabled drivers build config 00:09:17.229 mempool/stack: not in enabled drivers build config 00:09:17.229 dma/cnxk: not in enabled drivers build config 00:09:17.229 dma/dpaa: not in enabled drivers build config 00:09:17.229 dma/dpaa2: not in enabled 
drivers build config 00:09:17.229 dma/hisilicon: not in enabled drivers build config 00:09:17.229 dma/idxd: not in enabled drivers build config 00:09:17.229 dma/ioat: not in enabled drivers build config 00:09:17.229 dma/skeleton: not in enabled drivers build config 00:09:17.229 net/af_packet: not in enabled drivers build config 00:09:17.229 net/af_xdp: not in enabled drivers build config 00:09:17.229 net/ark: not in enabled drivers build config 00:09:17.229 net/atlantic: not in enabled drivers build config 00:09:17.229 net/avp: not in enabled drivers build config 00:09:17.229 net/axgbe: not in enabled drivers build config 00:09:17.229 net/bnx2x: not in enabled drivers build config 00:09:17.229 net/bnxt: not in enabled drivers build config 00:09:17.229 net/bonding: not in enabled drivers build config 00:09:17.229 net/cnxk: not in enabled drivers build config 00:09:17.229 net/cpfl: not in enabled drivers build config 00:09:17.229 net/cxgbe: not in enabled drivers build config 00:09:17.229 net/dpaa: not in enabled drivers build config 00:09:17.229 net/dpaa2: not in enabled drivers build config 00:09:17.229 net/e1000: not in enabled drivers build config 00:09:17.229 net/ena: not in enabled drivers build config 00:09:17.229 net/enetc: not in enabled drivers build config 00:09:17.229 net/enetfec: not in enabled drivers build config 00:09:17.229 net/enic: not in enabled drivers build config 00:09:17.229 net/failsafe: not in enabled drivers build config 00:09:17.229 net/fm10k: not in enabled drivers build config 00:09:17.229 net/gve: not in enabled drivers build config 00:09:17.229 net/hinic: not in enabled drivers build config 00:09:17.229 net/hns3: not in enabled drivers build config 00:09:17.229 net/i40e: not in enabled drivers build config 00:09:17.229 net/iavf: not in enabled drivers build config 00:09:17.229 net/ice: not in enabled drivers build config 00:09:17.229 net/idpf: not in enabled drivers build config 00:09:17.229 net/igc: not in enabled drivers build config 00:09:17.229 net/ionic: not in enabled drivers build config 00:09:17.229 net/ipn3ke: not in enabled drivers build config 00:09:17.229 net/ixgbe: not in enabled drivers build config 00:09:17.229 net/mana: not in enabled drivers build config 00:09:17.229 net/memif: not in enabled drivers build config 00:09:17.229 net/mlx4: not in enabled drivers build config 00:09:17.229 net/mlx5: not in enabled drivers build config 00:09:17.229 net/mvneta: not in enabled drivers build config 00:09:17.229 net/mvpp2: not in enabled drivers build config 00:09:17.229 net/netvsc: not in enabled drivers build config 00:09:17.229 net/nfb: not in enabled drivers build config 00:09:17.229 net/nfp: not in enabled drivers build config 00:09:17.229 net/ngbe: not in enabled drivers build config 00:09:17.229 net/null: not in enabled drivers build config 00:09:17.229 net/octeontx: not in enabled drivers build config 00:09:17.229 net/octeon_ep: not in enabled drivers build config 00:09:17.229 net/pcap: not in enabled drivers build config 00:09:17.229 net/pfe: not in enabled drivers build config 00:09:17.229 net/qede: not in enabled drivers build config 00:09:17.229 net/ring: not in enabled drivers build config 00:09:17.229 net/sfc: not in enabled drivers build config 00:09:17.229 net/softnic: not in enabled drivers build config 00:09:17.229 net/tap: not in enabled drivers build config 00:09:17.229 net/thunderx: not in enabled drivers build config 00:09:17.229 net/txgbe: not in enabled drivers build config 00:09:17.229 net/vdev_netvsc: not in enabled drivers 
build config 00:09:17.229 net/vhost: not in enabled drivers build config 00:09:17.229 net/virtio: not in enabled drivers build config 00:09:17.229 net/vmxnet3: not in enabled drivers build config 00:09:17.229 raw/*: missing internal dependency, "rawdev" 00:09:17.229 crypto/armv8: not in enabled drivers build config 00:09:17.229 crypto/bcmfs: not in enabled drivers build config 00:09:17.229 crypto/caam_jr: not in enabled drivers build config 00:09:17.229 crypto/ccp: not in enabled drivers build config 00:09:17.229 crypto/cnxk: not in enabled drivers build config 00:09:17.229 crypto/dpaa_sec: not in enabled drivers build config 00:09:17.229 crypto/dpaa2_sec: not in enabled drivers build config 00:09:17.229 crypto/ipsec_mb: not in enabled drivers build config 00:09:17.229 crypto/mlx5: not in enabled drivers build config 00:09:17.229 crypto/mvsam: not in enabled drivers build config 00:09:17.229 crypto/nitrox: not in enabled drivers build config 00:09:17.229 crypto/null: not in enabled drivers build config 00:09:17.229 crypto/octeontx: not in enabled drivers build config 00:09:17.229 crypto/openssl: not in enabled drivers build config 00:09:17.229 crypto/scheduler: not in enabled drivers build config 00:09:17.229 crypto/uadk: not in enabled drivers build config 00:09:17.229 crypto/virtio: not in enabled drivers build config 00:09:17.229 compress/isal: not in enabled drivers build config 00:09:17.229 compress/mlx5: not in enabled drivers build config 00:09:17.229 compress/octeontx: not in enabled drivers build config 00:09:17.229 compress/zlib: not in enabled drivers build config 00:09:17.229 regex/*: missing internal dependency, "regexdev" 00:09:17.229 ml/*: missing internal dependency, "mldev" 00:09:17.229 vdpa/ifc: not in enabled drivers build config 00:09:17.229 vdpa/mlx5: not in enabled drivers build config 00:09:17.229 vdpa/nfp: not in enabled drivers build config 00:09:17.229 vdpa/sfc: not in enabled drivers build config 00:09:17.229 event/*: missing internal dependency, "eventdev" 00:09:17.229 baseband/*: missing internal dependency, "bbdev" 00:09:17.229 gpu/*: missing internal dependency, "gpudev" 00:09:17.229 00:09:17.229 00:09:17.229 Build targets in project: 85 00:09:17.229 00:09:17.229 DPDK 23.11.0 00:09:17.229 00:09:17.229 User defined options 00:09:17.229 buildtype : debug 00:09:17.229 default_library : shared 00:09:17.229 libdir : lib 00:09:17.230 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:17.230 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:17.230 c_link_args : 00:09:17.230 cpu_instruction_set: native 00:09:17.230 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:09:17.230 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:09:17.230 enable_docs : false 00:09:17.230 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:09:17.230 enable_kmods : false 00:09:17.230 tests : false 00:09:17.230 00:09:17.230 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:17.230 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:09:17.230 [1/265] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:09:17.230 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:17.230 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:17.230 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:17.230 [5/265] Linking static target lib/librte_kvargs.a 00:09:17.230 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:17.230 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:17.230 [8/265] Linking static target lib/librte_log.a 00:09:17.230 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:17.230 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:17.230 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:17.230 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.230 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:17.230 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:17.230 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:17.230 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:17.230 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:17.230 [18/265] Linking static target lib/librte_telemetry.a 00:09:17.230 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:17.230 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:17.230 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:17.230 [22/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.230 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:17.230 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:17.230 [25/265] Linking target lib/librte_log.so.24.0 00:09:17.230 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:17.230 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:17.487 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:17.487 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:17.487 [30/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:09:17.487 [31/265] Linking target lib/librte_kvargs.so.24.0 00:09:17.487 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:17.487 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:17.805 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:17.805 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:17.805 [36/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:09:17.805 [37/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.805 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:17.805 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:17.805 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:17.805 
[41/265] Linking target lib/librte_telemetry.so.24.0 00:09:17.805 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:17.805 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:17.805 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:18.074 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:18.074 [46/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:09:18.074 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:18.332 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:18.332 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:18.332 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:18.332 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:18.332 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:18.332 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:18.590 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:18.590 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:18.590 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:18.590 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:18.590 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:18.590 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:18.850 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:18.850 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:18.850 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:18.850 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:19.108 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:19.108 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:19.108 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:19.108 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:19.108 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:19.367 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:19.367 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:19.367 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:19.367 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:19.367 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:19.367 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:19.367 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:19.367 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:19.367 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:19.625 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:19.625 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:19.625 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:19.884 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:19.884 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:19.884 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:19.884 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:20.142 [85/265] Linking static target lib/librte_eal.a 00:09:20.142 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:20.142 [87/265] Linking static target lib/librte_ring.a 00:09:20.142 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:20.142 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:20.142 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:20.142 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:20.142 [92/265] Linking static target lib/librte_rcu.a 00:09:20.401 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:20.401 [94/265] Linking static target lib/librte_mempool.a 00:09:20.401 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:20.401 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:20.677 [97/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:20.677 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:20.678 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:20.936 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:20.936 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:20.936 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:20.936 [103/265] Linking static target lib/librte_mbuf.a 00:09:20.936 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:21.194 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:21.194 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:21.194 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:21.194 [108/265] Linking static target lib/librte_meter.a 00:09:21.194 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:21.194 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:21.454 [111/265] Linking static target lib/librte_net.a 00:09:21.454 [112/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.454 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:21.712 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.712 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:21.712 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:21.712 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.970 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:21.970 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:22.536 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:22.536 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:22.536 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:09:22.536 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:22.536 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:22.536 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:22.536 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:22.536 [127/265] Linking static target lib/librte_pci.a 00:09:22.795 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:22.795 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:22.795 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:22.795 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:22.795 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:22.795 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:23.054 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:23.054 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:23.054 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:23.054 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:23.054 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.054 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:23.054 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:23.054 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:23.054 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:23.054 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:23.054 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:23.311 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:23.311 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:23.311 [147/265] Linking static target lib/librte_cmdline.a 00:09:23.311 [148/265] Linking static target lib/librte_ethdev.a 00:09:23.311 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:23.311 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:23.311 [151/265] Linking static target lib/librte_timer.a 00:09:23.569 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:23.569 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:23.828 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:23.828 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:23.828 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:23.828 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:23.828 [158/265] Linking static target lib/librte_compressdev.a 00:09:23.828 [159/265] Linking static target lib/librte_hash.a 00:09:24.087 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:24.087 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:24.087 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:24.087 [163/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:24.087 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:24.346 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:24.347 [166/265] Linking static target lib/librte_dmadev.a 00:09:24.347 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:24.347 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:24.606 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:24.606 [170/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:24.606 [171/265] Linking static target lib/librte_cryptodev.a 00:09:24.606 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:24.606 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:24.606 [174/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:24.866 [175/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:24.866 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:24.866 [177/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:24.866 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:24.866 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:24.866 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:25.126 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:25.126 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:25.127 [183/265] Linking static target lib/librte_power.a 00:09:25.385 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:25.385 [185/265] Linking static target lib/librte_reorder.a 00:09:25.385 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:25.385 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:25.385 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:25.385 [189/265] Linking static target lib/librte_security.a 00:09:25.385 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:25.701 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:25.701 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:26.268 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:26.268 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:26.268 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:26.268 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:26.268 [197/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:26.268 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:26.527 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:26.527 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:26.787 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:26.787 [202/265] Generating lib/cryptodev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:09:26.787 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:26.787 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:26.787 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:27.046 [206/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:27.046 [207/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:27.046 [208/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:27.046 [209/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:27.046 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:27.046 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:27.046 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:27.046 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:27.046 [214/265] Linking static target drivers/librte_bus_vdev.a 00:09:27.046 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:27.046 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:27.046 [217/265] Linking static target drivers/librte_bus_pci.a 00:09:27.046 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:27.046 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:27.305 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:27.305 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:27.305 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:27.305 [223/265] Linking static target drivers/librte_mempool_ring.a 00:09:27.305 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:27.873 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:28.132 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:28.392 [227/265] Linking static target lib/librte_vhost.a 00:09:30.942 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:30.942 [229/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:30.942 [230/265] Linking target lib/librte_eal.so.24.0 00:09:31.201 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:09:31.201 [232/265] Linking target lib/librte_ring.so.24.0 00:09:31.201 [233/265] Linking target lib/librte_pci.so.24.0 00:09:31.201 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:09:31.201 [235/265] Linking target lib/librte_timer.so.24.0 00:09:31.201 [236/265] Linking target lib/librte_meter.so.24.0 00:09:31.201 [237/265] Linking target lib/librte_dmadev.so.24.0 00:09:31.201 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:09:31.476 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:09:31.476 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:09:31.476 [241/265] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:09:31.476 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:09:31.476 [243/265] Linking target lib/librte_rcu.so.24.0 00:09:31.476 [244/265] Linking target lib/librte_mempool.so.24.0 00:09:31.476 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:09:31.476 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:09:31.476 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:09:31.476 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:09:31.476 [249/265] Linking target lib/librte_mbuf.so.24.0 00:09:31.735 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:09:31.735 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:09:31.735 [252/265] Linking target lib/librte_compressdev.so.24.0 00:09:31.735 [253/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:31.735 [254/265] Linking target lib/librte_reorder.so.24.0 00:09:31.735 [255/265] Linking target lib/librte_net.so.24.0 00:09:31.993 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:09:31.993 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:09:31.993 [258/265] Linking target lib/librte_hash.so.24.0 00:09:31.993 [259/265] Linking target lib/librte_cmdline.so.24.0 00:09:31.993 [260/265] Linking target lib/librte_security.so.24.0 00:09:31.993 [261/265] Linking target lib/librte_ethdev.so.24.0 00:09:31.993 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:09:32.253 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:09:32.253 [264/265] Linking target lib/librte_vhost.so.24.0 00:09:32.253 [265/265] Linking target lib/librte_power.so.24.0 00:09:32.253 INFO: autodetecting backend as ninja 00:09:32.253 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:33.192 CC lib/ut/ut.o 00:09:33.451 CC lib/log/log_flags.o 00:09:33.451 CC lib/log/log.o 00:09:33.451 CC lib/log/log_deprecated.o 00:09:33.451 CC lib/ut_mock/mock.o 00:09:33.451 LIB libspdk_ut.a 00:09:33.451 LIB libspdk_ut_mock.a 00:09:33.451 LIB libspdk_log.a 00:09:33.451 SO libspdk_ut.so.2.0 00:09:33.451 SO libspdk_ut_mock.so.6.0 00:09:33.451 SO libspdk_log.so.7.0 00:09:33.451 SYMLINK libspdk_ut.so 00:09:33.710 SYMLINK libspdk_ut_mock.so 00:09:33.710 SYMLINK libspdk_log.so 00:09:33.973 CC lib/dma/dma.o 00:09:33.973 CC lib/util/base64.o 00:09:33.973 CC lib/util/bit_array.o 00:09:33.973 CC lib/util/cpuset.o 00:09:33.973 CC lib/util/crc16.o 00:09:33.973 CC lib/ioat/ioat.o 00:09:33.973 CC lib/util/crc32.o 00:09:33.973 CXX lib/trace_parser/trace.o 00:09:33.973 CC lib/util/crc32c.o 00:09:33.973 CC lib/util/crc32_ieee.o 00:09:33.973 CC lib/vfio_user/host/vfio_user_pci.o 00:09:33.973 CC lib/util/crc64.o 00:09:33.973 CC lib/util/dif.o 00:09:33.973 CC lib/util/fd.o 00:09:33.973 LIB libspdk_dma.a 00:09:33.973 CC lib/util/file.o 00:09:33.973 CC lib/util/hexlify.o 00:09:34.236 SO libspdk_dma.so.4.0 00:09:34.236 LIB libspdk_ioat.a 00:09:34.236 CC lib/util/iov.o 00:09:34.236 CC lib/vfio_user/host/vfio_user.o 00:09:34.236 SYMLINK libspdk_dma.so 00:09:34.236 CC lib/util/math.o 00:09:34.236 CC lib/util/pipe.o 00:09:34.236 SO libspdk_ioat.so.7.0 00:09:34.236 CC lib/util/strerror_tls.o 00:09:34.236 
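For context, the meson/ninja output above covers only the bundled DPDK subproject; the CC/LIB/SO lines that follow come from SPDK's own make. A minimal sketch of the equivalent top-level invocation, assuming the standard SPDK entry points (the exact flags used by this job are not shown here):

    # illustrative sketch only
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug      # a debug build, to match "buildtype : debug" above
    make -j10                       # also rebuilds the bundled DPDK under dpdk/build-tmp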
SYMLINK libspdk_ioat.so 00:09:34.236 CC lib/util/string.o 00:09:34.236 CC lib/util/uuid.o 00:09:34.236 CC lib/util/fd_group.o 00:09:34.236 CC lib/util/xor.o 00:09:34.236 CC lib/util/zipf.o 00:09:34.236 LIB libspdk_vfio_user.a 00:09:34.496 SO libspdk_vfio_user.so.5.0 00:09:34.496 SYMLINK libspdk_vfio_user.so 00:09:34.496 LIB libspdk_util.a 00:09:34.756 SO libspdk_util.so.9.0 00:09:34.756 LIB libspdk_trace_parser.a 00:09:34.756 SYMLINK libspdk_util.so 00:09:34.756 SO libspdk_trace_parser.so.5.0 00:09:35.015 SYMLINK libspdk_trace_parser.so 00:09:35.015 CC lib/vmd/vmd.o 00:09:35.015 CC lib/json/json_parse.o 00:09:35.015 CC lib/vmd/led.o 00:09:35.015 CC lib/json/json_util.o 00:09:35.015 CC lib/json/json_write.o 00:09:35.015 CC lib/env_dpdk/env.o 00:09:35.015 CC lib/env_dpdk/memory.o 00:09:35.015 CC lib/rdma/common.o 00:09:35.015 CC lib/conf/conf.o 00:09:35.015 CC lib/idxd/idxd.o 00:09:35.015 CC lib/idxd/idxd_user.o 00:09:35.274 CC lib/env_dpdk/pci.o 00:09:35.274 LIB libspdk_conf.a 00:09:35.274 CC lib/rdma/rdma_verbs.o 00:09:35.274 SO libspdk_conf.so.6.0 00:09:35.274 SYMLINK libspdk_conf.so 00:09:35.274 CC lib/env_dpdk/init.o 00:09:35.274 CC lib/env_dpdk/threads.o 00:09:35.274 LIB libspdk_json.a 00:09:35.274 SO libspdk_json.so.6.0 00:09:35.274 CC lib/env_dpdk/pci_ioat.o 00:09:35.532 SYMLINK libspdk_json.so 00:09:35.532 CC lib/env_dpdk/pci_virtio.o 00:09:35.532 LIB libspdk_rdma.a 00:09:35.532 CC lib/env_dpdk/pci_vmd.o 00:09:35.533 SO libspdk_rdma.so.6.0 00:09:35.533 LIB libspdk_idxd.a 00:09:35.533 CC lib/env_dpdk/pci_idxd.o 00:09:35.533 SYMLINK libspdk_rdma.so 00:09:35.533 SO libspdk_idxd.so.12.0 00:09:35.533 CC lib/env_dpdk/pci_event.o 00:09:35.533 CC lib/env_dpdk/sigbus_handler.o 00:09:35.533 LIB libspdk_vmd.a 00:09:35.533 CC lib/env_dpdk/pci_dpdk.o 00:09:35.533 SYMLINK libspdk_idxd.so 00:09:35.533 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:35.533 SO libspdk_vmd.so.6.0 00:09:35.533 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:35.793 SYMLINK libspdk_vmd.so 00:09:35.793 CC lib/jsonrpc/jsonrpc_server.o 00:09:35.793 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:35.793 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:35.793 CC lib/jsonrpc/jsonrpc_client.o 00:09:36.052 LIB libspdk_jsonrpc.a 00:09:36.052 SO libspdk_jsonrpc.so.6.0 00:09:36.052 SYMLINK libspdk_jsonrpc.so 00:09:36.313 LIB libspdk_env_dpdk.a 00:09:36.313 SO libspdk_env_dpdk.so.14.0 00:09:36.572 CC lib/rpc/rpc.o 00:09:36.572 SYMLINK libspdk_env_dpdk.so 00:09:36.572 LIB libspdk_rpc.a 00:09:36.832 SO libspdk_rpc.so.6.0 00:09:36.832 SYMLINK libspdk_rpc.so 00:09:37.092 CC lib/keyring/keyring.o 00:09:37.092 CC lib/keyring/keyring_rpc.o 00:09:37.092 CC lib/trace/trace.o 00:09:37.092 CC lib/trace/trace_flags.o 00:09:37.092 CC lib/trace/trace_rpc.o 00:09:37.092 CC lib/notify/notify.o 00:09:37.092 CC lib/notify/notify_rpc.o 00:09:37.351 LIB libspdk_notify.a 00:09:37.351 SO libspdk_notify.so.6.0 00:09:37.351 LIB libspdk_trace.a 00:09:37.351 LIB libspdk_keyring.a 00:09:37.351 SO libspdk_trace.so.10.0 00:09:37.351 SYMLINK libspdk_notify.so 00:09:37.351 SO libspdk_keyring.so.1.0 00:09:37.611 SYMLINK libspdk_trace.so 00:09:37.611 SYMLINK libspdk_keyring.so 00:09:37.870 CC lib/thread/thread.o 00:09:37.870 CC lib/thread/iobuf.o 00:09:37.870 CC lib/sock/sock.o 00:09:37.870 CC lib/sock/sock_rpc.o 00:09:38.129 LIB libspdk_sock.a 00:09:38.388 SO libspdk_sock.so.9.0 00:09:38.388 SYMLINK libspdk_sock.so 00:09:38.646 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:38.646 CC lib/nvme/nvme_fabric.o 00:09:38.646 CC lib/nvme/nvme_ctrlr.o 00:09:38.646 CC lib/nvme/nvme_ns.o 00:09:38.646 CC 
lib/nvme/nvme_ns_cmd.o 00:09:38.646 CC lib/nvme/nvme_pcie_common.o 00:09:38.646 CC lib/nvme/nvme_pcie.o 00:09:38.646 CC lib/nvme/nvme_qpair.o 00:09:38.646 CC lib/nvme/nvme.o 00:09:39.214 LIB libspdk_thread.a 00:09:39.214 SO libspdk_thread.so.10.0 00:09:39.214 SYMLINK libspdk_thread.so 00:09:39.214 CC lib/nvme/nvme_quirks.o 00:09:39.472 CC lib/nvme/nvme_transport.o 00:09:39.472 CC lib/nvme/nvme_discovery.o 00:09:39.472 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:39.472 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:39.472 CC lib/nvme/nvme_tcp.o 00:09:39.472 CC lib/nvme/nvme_opal.o 00:09:39.730 CC lib/nvme/nvme_io_msg.o 00:09:39.730 CC lib/nvme/nvme_poll_group.o 00:09:39.990 CC lib/accel/accel.o 00:09:39.990 CC lib/nvme/nvme_zns.o 00:09:39.990 CC lib/nvme/nvme_stubs.o 00:09:40.248 CC lib/nvme/nvme_auth.o 00:09:40.248 CC lib/nvme/nvme_cuse.o 00:09:40.248 CC lib/blob/blobstore.o 00:09:40.507 CC lib/init/json_config.o 00:09:40.507 CC lib/virtio/virtio.o 00:09:40.507 CC lib/virtio/virtio_vhost_user.o 00:09:40.770 CC lib/init/subsystem.o 00:09:40.770 CC lib/init/subsystem_rpc.o 00:09:40.770 CC lib/init/rpc.o 00:09:41.030 CC lib/nvme/nvme_rdma.o 00:09:41.030 CC lib/accel/accel_rpc.o 00:09:41.030 CC lib/accel/accel_sw.o 00:09:41.030 CC lib/virtio/virtio_vfio_user.o 00:09:41.030 LIB libspdk_init.a 00:09:41.030 CC lib/blob/request.o 00:09:41.030 CC lib/blob/zeroes.o 00:09:41.030 SO libspdk_init.so.5.0 00:09:41.030 CC lib/virtio/virtio_pci.o 00:09:41.288 SYMLINK libspdk_init.so 00:09:41.288 CC lib/blob/blob_bs_dev.o 00:09:41.545 LIB libspdk_accel.a 00:09:41.545 CC lib/event/app.o 00:09:41.545 CC lib/event/app_rpc.o 00:09:41.545 CC lib/event/reactor.o 00:09:41.545 CC lib/event/log_rpc.o 00:09:41.545 CC lib/event/scheduler_static.o 00:09:41.545 LIB libspdk_virtio.a 00:09:41.546 SO libspdk_accel.so.15.0 00:09:41.546 SO libspdk_virtio.so.7.0 00:09:41.546 SYMLINK libspdk_accel.so 00:09:41.546 SYMLINK libspdk_virtio.so 00:09:41.804 LIB libspdk_event.a 00:09:41.804 CC lib/bdev/bdev.o 00:09:41.804 CC lib/bdev/bdev_rpc.o 00:09:41.804 CC lib/bdev/bdev_zone.o 00:09:41.804 CC lib/bdev/part.o 00:09:41.804 CC lib/bdev/scsi_nvme.o 00:09:41.804 SO libspdk_event.so.13.0 00:09:42.063 SYMLINK libspdk_event.so 00:09:42.063 LIB libspdk_nvme.a 00:09:42.321 SO libspdk_nvme.so.13.0 00:09:42.580 SYMLINK libspdk_nvme.so 00:09:42.839 LIB libspdk_blob.a 00:09:43.098 SO libspdk_blob.so.11.0 00:09:43.098 SYMLINK libspdk_blob.so 00:09:43.358 CC lib/blobfs/blobfs.o 00:09:43.358 CC lib/blobfs/tree.o 00:09:43.618 CC lib/lvol/lvol.o 00:09:44.191 LIB libspdk_blobfs.a 00:09:44.191 LIB libspdk_bdev.a 00:09:44.191 SO libspdk_blobfs.so.10.0 00:09:44.191 LIB libspdk_lvol.a 00:09:44.191 SO libspdk_bdev.so.15.0 00:09:44.191 SYMLINK libspdk_blobfs.so 00:09:44.450 SO libspdk_lvol.so.10.0 00:09:44.450 SYMLINK libspdk_bdev.so 00:09:44.450 SYMLINK libspdk_lvol.so 00:09:44.710 CC lib/ftl/ftl_core.o 00:09:44.711 CC lib/ftl/ftl_init.o 00:09:44.711 CC lib/ftl/ftl_layout.o 00:09:44.711 CC lib/ftl/ftl_debug.o 00:09:44.711 CC lib/ftl/ftl_sb.o 00:09:44.711 CC lib/ftl/ftl_io.o 00:09:44.711 CC lib/nvmf/ctrlr.o 00:09:44.711 CC lib/scsi/dev.o 00:09:44.711 CC lib/nbd/nbd.o 00:09:44.711 CC lib/ublk/ublk.o 00:09:44.973 CC lib/ublk/ublk_rpc.o 00:09:44.973 CC lib/scsi/lun.o 00:09:44.973 CC lib/ftl/ftl_l2p.o 00:09:44.973 CC lib/nvmf/ctrlr_discovery.o 00:09:44.973 CC lib/ftl/ftl_l2p_flat.o 00:09:44.973 CC lib/ftl/ftl_nv_cache.o 00:09:44.973 CC lib/nbd/nbd_rpc.o 00:09:44.973 CC lib/scsi/port.o 00:09:44.973 CC lib/ftl/ftl_band.o 00:09:45.238 CC lib/ftl/ftl_band_ops.o 00:09:45.238 
CC lib/ftl/ftl_writer.o 00:09:45.238 CC lib/scsi/scsi.o 00:09:45.238 CC lib/ftl/ftl_rq.o 00:09:45.238 LIB libspdk_nbd.a 00:09:45.238 SO libspdk_nbd.so.7.0 00:09:45.238 SYMLINK libspdk_nbd.so 00:09:45.238 CC lib/ftl/ftl_reloc.o 00:09:45.238 CC lib/scsi/scsi_bdev.o 00:09:45.238 LIB libspdk_ublk.a 00:09:45.505 SO libspdk_ublk.so.3.0 00:09:45.505 CC lib/scsi/scsi_pr.o 00:09:45.505 CC lib/scsi/scsi_rpc.o 00:09:45.505 CC lib/nvmf/ctrlr_bdev.o 00:09:45.505 SYMLINK libspdk_ublk.so 00:09:45.505 CC lib/ftl/ftl_l2p_cache.o 00:09:45.506 CC lib/ftl/ftl_p2l.o 00:09:45.506 CC lib/scsi/task.o 00:09:45.506 CC lib/nvmf/subsystem.o 00:09:45.774 CC lib/ftl/mngt/ftl_mngt.o 00:09:45.774 CC lib/nvmf/nvmf.o 00:09:45.774 CC lib/nvmf/nvmf_rpc.o 00:09:45.774 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:45.774 LIB libspdk_scsi.a 00:09:45.774 SO libspdk_scsi.so.9.0 00:09:46.045 CC lib/nvmf/transport.o 00:09:46.045 SYMLINK libspdk_scsi.so 00:09:46.045 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:46.045 CC lib/nvmf/tcp.o 00:09:46.045 CC lib/nvmf/rdma.o 00:09:46.045 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:46.045 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:46.045 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:46.313 CC lib/iscsi/conn.o 00:09:46.313 CC lib/iscsi/init_grp.o 00:09:46.313 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:46.313 CC lib/vhost/vhost.o 00:09:46.589 CC lib/vhost/vhost_rpc.o 00:09:46.589 CC lib/vhost/vhost_scsi.o 00:09:46.589 CC lib/vhost/vhost_blk.o 00:09:46.589 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:46.589 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:46.589 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:46.869 CC lib/vhost/rte_vhost_user.o 00:09:46.869 CC lib/iscsi/iscsi.o 00:09:46.869 CC lib/iscsi/md5.o 00:09:46.869 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:47.129 CC lib/iscsi/param.o 00:09:47.129 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:47.129 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:47.129 CC lib/iscsi/portal_grp.o 00:09:47.387 CC lib/ftl/utils/ftl_conf.o 00:09:47.387 CC lib/iscsi/tgt_node.o 00:09:47.387 CC lib/ftl/utils/ftl_md.o 00:09:47.387 CC lib/iscsi/iscsi_subsystem.o 00:09:47.387 CC lib/iscsi/iscsi_rpc.o 00:09:47.387 CC lib/ftl/utils/ftl_mempool.o 00:09:47.387 CC lib/ftl/utils/ftl_bitmap.o 00:09:47.387 CC lib/iscsi/task.o 00:09:47.646 CC lib/ftl/utils/ftl_property.o 00:09:47.646 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:47.646 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:47.646 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:47.646 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:47.646 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:47.916 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:47.916 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:47.916 LIB libspdk_nvmf.a 00:09:47.916 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:47.916 LIB libspdk_vhost.a 00:09:47.916 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:47.916 SO libspdk_vhost.so.8.0 00:09:47.916 SO libspdk_nvmf.so.18.0 00:09:47.916 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:47.916 CC lib/ftl/base/ftl_base_dev.o 00:09:47.916 CC lib/ftl/base/ftl_base_bdev.o 00:09:47.916 CC lib/ftl/ftl_trace.o 00:09:48.174 SYMLINK libspdk_vhost.so 00:09:48.174 SYMLINK libspdk_nvmf.so 00:09:48.174 LIB libspdk_iscsi.a 00:09:48.431 LIB libspdk_ftl.a 00:09:48.431 SO libspdk_iscsi.so.8.0 00:09:48.431 SO libspdk_ftl.so.9.0 00:09:48.431 SYMLINK libspdk_iscsi.so 00:09:48.999 SYMLINK libspdk_ftl.so 00:09:49.258 CC module/env_dpdk/env_dpdk_rpc.o 00:09:49.258 CC module/blob/bdev/blob_bdev.o 00:09:49.258 CC module/accel/error/accel_error.o 00:09:49.258 CC module/accel/ioat/accel_ioat.o 00:09:49.258 CC module/accel/dsa/accel_dsa.o 00:09:49.258 CC 
module/accel/iaa/accel_iaa.o 00:09:49.258 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:49.258 CC module/sock/posix/posix.o 00:09:49.258 CC module/keyring/file/keyring.o 00:09:49.258 CC module/sock/uring/uring.o 00:09:49.258 LIB libspdk_env_dpdk_rpc.a 00:09:49.258 SO libspdk_env_dpdk_rpc.so.6.0 00:09:49.516 SYMLINK libspdk_env_dpdk_rpc.so 00:09:49.516 CC module/accel/ioat/accel_ioat_rpc.o 00:09:49.516 CC module/keyring/file/keyring_rpc.o 00:09:49.516 CC module/accel/error/accel_error_rpc.o 00:09:49.516 CC module/accel/iaa/accel_iaa_rpc.o 00:09:49.516 LIB libspdk_scheduler_dynamic.a 00:09:49.516 SO libspdk_scheduler_dynamic.so.4.0 00:09:49.516 CC module/accel/dsa/accel_dsa_rpc.o 00:09:49.516 LIB libspdk_blob_bdev.a 00:09:49.516 LIB libspdk_accel_ioat.a 00:09:49.516 SYMLINK libspdk_scheduler_dynamic.so 00:09:49.516 LIB libspdk_keyring_file.a 00:09:49.516 SO libspdk_blob_bdev.so.11.0 00:09:49.516 SO libspdk_accel_ioat.so.6.0 00:09:49.516 SO libspdk_keyring_file.so.1.0 00:09:49.516 LIB libspdk_accel_error.a 00:09:49.516 LIB libspdk_accel_iaa.a 00:09:49.516 SO libspdk_accel_iaa.so.3.0 00:09:49.516 LIB libspdk_accel_dsa.a 00:09:49.516 SYMLINK libspdk_blob_bdev.so 00:09:49.516 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:49.516 SYMLINK libspdk_accel_ioat.so 00:09:49.516 SO libspdk_accel_error.so.2.0 00:09:49.516 SYMLINK libspdk_keyring_file.so 00:09:49.516 SO libspdk_accel_dsa.so.5.0 00:09:49.774 SYMLINK libspdk_accel_iaa.so 00:09:49.774 SYMLINK libspdk_accel_error.so 00:09:49.774 SYMLINK libspdk_accel_dsa.so 00:09:49.774 CC module/scheduler/gscheduler/gscheduler.o 00:09:49.774 LIB libspdk_scheduler_dpdk_governor.a 00:09:49.774 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:49.774 CC module/bdev/gpt/gpt.o 00:09:49.774 CC module/bdev/error/vbdev_error.o 00:09:49.774 LIB libspdk_scheduler_gscheduler.a 00:09:49.774 CC module/bdev/malloc/bdev_malloc.o 00:09:49.774 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:49.774 CC module/bdev/lvol/vbdev_lvol.o 00:09:49.774 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:49.774 CC module/bdev/delay/vbdev_delay.o 00:09:49.774 SO libspdk_scheduler_gscheduler.so.4.0 00:09:49.774 CC module/blobfs/bdev/blobfs_bdev.o 00:09:49.774 LIB libspdk_sock_uring.a 00:09:50.033 LIB libspdk_sock_posix.a 00:09:50.033 SO libspdk_sock_uring.so.5.0 00:09:50.033 SYMLINK libspdk_scheduler_gscheduler.so 00:09:50.033 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:50.033 SO libspdk_sock_posix.so.6.0 00:09:50.033 SYMLINK libspdk_sock_uring.so 00:09:50.033 CC module/bdev/gpt/vbdev_gpt.o 00:09:50.033 SYMLINK libspdk_sock_posix.so 00:09:50.033 CC module/bdev/error/vbdev_error_rpc.o 00:09:50.033 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:50.033 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:50.293 CC module/bdev/null/bdev_null.o 00:09:50.293 LIB libspdk_bdev_delay.a 00:09:50.293 LIB libspdk_bdev_error.a 00:09:50.293 SO libspdk_bdev_delay.so.6.0 00:09:50.293 LIB libspdk_blobfs_bdev.a 00:09:50.293 LIB libspdk_bdev_malloc.a 00:09:50.293 LIB libspdk_bdev_gpt.a 00:09:50.293 LIB libspdk_bdev_lvol.a 00:09:50.293 SO libspdk_bdev_error.so.6.0 00:09:50.293 SO libspdk_blobfs_bdev.so.6.0 00:09:50.293 CC module/bdev/nvme/bdev_nvme.o 00:09:50.293 SO libspdk_bdev_malloc.so.6.0 00:09:50.293 SO libspdk_bdev_gpt.so.6.0 00:09:50.293 SO libspdk_bdev_lvol.so.6.0 00:09:50.293 SYMLINK libspdk_bdev_delay.so 00:09:50.293 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:50.293 SYMLINK libspdk_bdev_error.so 00:09:50.293 CC module/bdev/nvme/nvme_rpc.o 00:09:50.293 SYMLINK libspdk_blobfs_bdev.so 00:09:50.293 CC 
module/bdev/passthru/vbdev_passthru.o 00:09:50.293 CC module/bdev/nvme/bdev_mdns_client.o 00:09:50.293 SYMLINK libspdk_bdev_malloc.so 00:09:50.293 CC module/bdev/raid/bdev_raid.o 00:09:50.293 SYMLINK libspdk_bdev_gpt.so 00:09:50.293 CC module/bdev/raid/bdev_raid_rpc.o 00:09:50.293 SYMLINK libspdk_bdev_lvol.so 00:09:50.293 CC module/bdev/raid/bdev_raid_sb.o 00:09:50.293 CC module/bdev/raid/raid0.o 00:09:50.553 CC module/bdev/null/bdev_null_rpc.o 00:09:50.553 CC module/bdev/raid/raid1.o 00:09:50.553 CC module/bdev/raid/concat.o 00:09:50.553 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:50.553 LIB libspdk_bdev_null.a 00:09:50.811 SO libspdk_bdev_null.so.6.0 00:09:50.811 CC module/bdev/nvme/vbdev_opal.o 00:09:50.811 SYMLINK libspdk_bdev_null.so 00:09:50.811 CC module/bdev/split/vbdev_split.o 00:09:50.811 LIB libspdk_bdev_passthru.a 00:09:50.811 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:50.811 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:50.811 SO libspdk_bdev_passthru.so.6.0 00:09:50.811 SYMLINK libspdk_bdev_passthru.so 00:09:50.811 CC module/bdev/uring/bdev_uring.o 00:09:50.811 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:50.811 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:50.811 CC module/bdev/aio/bdev_aio.o 00:09:51.070 CC module/bdev/aio/bdev_aio_rpc.o 00:09:51.070 CC module/bdev/split/vbdev_split_rpc.o 00:09:51.070 CC module/bdev/uring/bdev_uring_rpc.o 00:09:51.070 LIB libspdk_bdev_zone_block.a 00:09:51.070 LIB libspdk_bdev_split.a 00:09:51.070 SO libspdk_bdev_zone_block.so.6.0 00:09:51.070 SO libspdk_bdev_split.so.6.0 00:09:51.070 SYMLINK libspdk_bdev_zone_block.so 00:09:51.070 LIB libspdk_bdev_raid.a 00:09:51.070 LIB libspdk_bdev_aio.a 00:09:51.070 CC module/bdev/ftl/bdev_ftl.o 00:09:51.070 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:51.070 SYMLINK libspdk_bdev_split.so 00:09:51.328 LIB libspdk_bdev_uring.a 00:09:51.328 SO libspdk_bdev_aio.so.6.0 00:09:51.328 SO libspdk_bdev_raid.so.6.0 00:09:51.328 SO libspdk_bdev_uring.so.6.0 00:09:51.328 CC module/bdev/iscsi/bdev_iscsi.o 00:09:51.328 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:51.328 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:51.328 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:51.328 SYMLINK libspdk_bdev_aio.so 00:09:51.328 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:51.328 SYMLINK libspdk_bdev_uring.so 00:09:51.328 SYMLINK libspdk_bdev_raid.so 00:09:51.588 LIB libspdk_bdev_ftl.a 00:09:51.588 SO libspdk_bdev_ftl.so.6.0 00:09:51.588 SYMLINK libspdk_bdev_ftl.so 00:09:51.588 LIB libspdk_bdev_iscsi.a 00:09:51.588 SO libspdk_bdev_iscsi.so.6.0 00:09:51.847 SYMLINK libspdk_bdev_iscsi.so 00:09:51.847 LIB libspdk_bdev_virtio.a 00:09:51.847 SO libspdk_bdev_virtio.so.6.0 00:09:51.847 SYMLINK libspdk_bdev_virtio.so 00:09:52.106 LIB libspdk_bdev_nvme.a 00:09:52.366 SO libspdk_bdev_nvme.so.7.0 00:09:52.366 SYMLINK libspdk_bdev_nvme.so 00:09:52.934 CC module/event/subsystems/sock/sock.o 00:09:52.934 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:52.934 CC module/event/subsystems/scheduler/scheduler.o 00:09:52.934 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:52.934 CC module/event/subsystems/vmd/vmd.o 00:09:52.934 CC module/event/subsystems/keyring/keyring.o 00:09:52.934 CC module/event/subsystems/iobuf/iobuf.o 00:09:52.934 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:53.193 LIB libspdk_event_vhost_blk.a 00:09:53.193 LIB libspdk_event_scheduler.a 00:09:53.193 LIB libspdk_event_sock.a 00:09:53.193 LIB libspdk_event_vmd.a 00:09:53.193 LIB libspdk_event_keyring.a 00:09:53.193 SO libspdk_event_vhost_blk.so.3.0 
00:09:53.193 SO libspdk_event_scheduler.so.4.0 00:09:53.193 SO libspdk_event_sock.so.5.0 00:09:53.193 LIB libspdk_event_iobuf.a 00:09:53.193 SO libspdk_event_keyring.so.1.0 00:09:53.193 SO libspdk_event_vmd.so.6.0 00:09:53.193 SYMLINK libspdk_event_vhost_blk.so 00:09:53.193 SYMLINK libspdk_event_scheduler.so 00:09:53.193 SO libspdk_event_iobuf.so.3.0 00:09:53.193 SYMLINK libspdk_event_sock.so 00:09:53.193 SYMLINK libspdk_event_keyring.so 00:09:53.193 SYMLINK libspdk_event_vmd.so 00:09:53.193 SYMLINK libspdk_event_iobuf.so 00:09:53.761 CC module/event/subsystems/accel/accel.o 00:09:53.761 LIB libspdk_event_accel.a 00:09:53.761 SO libspdk_event_accel.so.6.0 00:09:54.021 SYMLINK libspdk_event_accel.so 00:09:54.279 CC module/event/subsystems/bdev/bdev.o 00:09:54.537 LIB libspdk_event_bdev.a 00:09:54.537 SO libspdk_event_bdev.so.6.0 00:09:54.537 SYMLINK libspdk_event_bdev.so 00:09:54.796 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:54.796 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:54.796 CC module/event/subsystems/nbd/nbd.o 00:09:54.796 CC module/event/subsystems/ublk/ublk.o 00:09:54.796 CC module/event/subsystems/scsi/scsi.o 00:09:55.056 LIB libspdk_event_nbd.a 00:09:55.056 LIB libspdk_event_ublk.a 00:09:55.056 SO libspdk_event_nbd.so.6.0 00:09:55.056 LIB libspdk_event_scsi.a 00:09:55.056 SO libspdk_event_ublk.so.3.0 00:09:55.056 SO libspdk_event_scsi.so.6.0 00:09:55.056 SYMLINK libspdk_event_nbd.so 00:09:55.056 SYMLINK libspdk_event_ublk.so 00:09:55.056 LIB libspdk_event_nvmf.a 00:09:55.056 SYMLINK libspdk_event_scsi.so 00:09:55.056 SO libspdk_event_nvmf.so.6.0 00:09:55.315 SYMLINK libspdk_event_nvmf.so 00:09:55.573 CC module/event/subsystems/iscsi/iscsi.o 00:09:55.573 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:55.573 LIB libspdk_event_iscsi.a 00:09:55.573 LIB libspdk_event_vhost_scsi.a 00:09:55.573 SO libspdk_event_iscsi.so.6.0 00:09:55.573 SO libspdk_event_vhost_scsi.so.3.0 00:09:55.832 SYMLINK libspdk_event_iscsi.so 00:09:55.832 SYMLINK libspdk_event_vhost_scsi.so 00:09:55.832 SO libspdk.so.6.0 00:09:55.832 SYMLINK libspdk.so 00:09:56.400 CXX app/trace/trace.o 00:09:56.400 CC examples/nvme/hello_world/hello_world.o 00:09:56.400 CC examples/accel/perf/accel_perf.o 00:09:56.400 CC examples/vmd/lsvmd/lsvmd.o 00:09:56.400 CC examples/ioat/perf/perf.o 00:09:56.400 CC examples/sock/hello_world/hello_sock.o 00:09:56.400 CC examples/nvmf/nvmf/nvmf.o 00:09:56.400 CC test/accel/dif/dif.o 00:09:56.400 CC examples/bdev/hello_world/hello_bdev.o 00:09:56.400 CC examples/blob/hello_world/hello_blob.o 00:09:56.400 LINK lsvmd 00:09:56.400 LINK hello_world 00:09:56.400 LINK ioat_perf 00:09:56.658 LINK hello_sock 00:09:56.658 LINK hello_bdev 00:09:56.658 LINK hello_blob 00:09:56.658 LINK spdk_trace 00:09:56.658 LINK nvmf 00:09:56.658 LINK accel_perf 00:09:56.658 CC examples/vmd/led/led.o 00:09:56.658 LINK dif 00:09:56.658 CC examples/nvme/reconnect/reconnect.o 00:09:56.658 CC examples/ioat/verify/verify.o 00:09:56.658 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:56.917 LINK led 00:09:56.917 CC examples/bdev/bdevperf/bdevperf.o 00:09:56.917 CC app/trace_record/trace_record.o 00:09:56.917 CC examples/blob/cli/blobcli.o 00:09:57.176 LINK verify 00:09:57.176 CC app/nvmf_tgt/nvmf_main.o 00:09:57.176 CC examples/util/zipf/zipf.o 00:09:57.176 LINK spdk_trace_record 00:09:57.176 CC test/app/bdev_svc/bdev_svc.o 00:09:57.176 LINK reconnect 00:09:57.176 CC test/bdev/bdevio/bdevio.o 00:09:57.436 LINK nvmf_tgt 00:09:57.436 LINK zipf 00:09:57.436 LINK nvme_manage 00:09:57.436 LINK bdev_svc 
00:09:57.436 CC test/blobfs/mkfs/mkfs.o 00:09:57.436 CC app/iscsi_tgt/iscsi_tgt.o 00:09:57.436 LINK blobcli 00:09:57.694 CC app/spdk_tgt/spdk_tgt.o 00:09:57.694 CC app/spdk_lspci/spdk_lspci.o 00:09:57.694 LINK bdevperf 00:09:57.694 LINK bdevio 00:09:57.694 CC examples/nvme/arbitration/arbitration.o 00:09:57.694 LINK mkfs 00:09:57.694 LINK iscsi_tgt 00:09:57.694 LINK spdk_lspci 00:09:57.694 CC examples/thread/thread/thread_ex.o 00:09:57.694 LINK spdk_tgt 00:09:57.694 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:57.953 CC examples/idxd/perf/perf.o 00:09:57.953 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:57.953 TEST_HEADER include/spdk/accel.h 00:09:57.953 TEST_HEADER include/spdk/accel_module.h 00:09:57.953 TEST_HEADER include/spdk/assert.h 00:09:57.953 TEST_HEADER include/spdk/barrier.h 00:09:57.953 TEST_HEADER include/spdk/base64.h 00:09:57.953 TEST_HEADER include/spdk/bdev.h 00:09:57.953 TEST_HEADER include/spdk/bdev_module.h 00:09:57.953 TEST_HEADER include/spdk/bdev_zone.h 00:09:57.953 TEST_HEADER include/spdk/bit_array.h 00:09:57.953 TEST_HEADER include/spdk/bit_pool.h 00:09:57.953 TEST_HEADER include/spdk/blob_bdev.h 00:09:57.953 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:57.953 TEST_HEADER include/spdk/blobfs.h 00:09:57.953 LINK arbitration 00:09:57.953 TEST_HEADER include/spdk/blob.h 00:09:57.953 TEST_HEADER include/spdk/conf.h 00:09:57.953 TEST_HEADER include/spdk/config.h 00:09:57.953 TEST_HEADER include/spdk/cpuset.h 00:09:57.953 TEST_HEADER include/spdk/crc16.h 00:09:57.953 TEST_HEADER include/spdk/crc32.h 00:09:57.953 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:57.953 TEST_HEADER include/spdk/crc64.h 00:09:57.953 TEST_HEADER include/spdk/dif.h 00:09:57.953 TEST_HEADER include/spdk/dma.h 00:09:57.953 TEST_HEADER include/spdk/endian.h 00:09:57.953 TEST_HEADER include/spdk/env_dpdk.h 00:09:57.953 TEST_HEADER include/spdk/env.h 00:09:57.953 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:57.953 TEST_HEADER include/spdk/event.h 00:09:57.953 TEST_HEADER include/spdk/fd_group.h 00:09:57.953 TEST_HEADER include/spdk/fd.h 00:09:57.953 TEST_HEADER include/spdk/file.h 00:09:57.953 LINK thread 00:09:57.953 TEST_HEADER include/spdk/ftl.h 00:09:57.953 TEST_HEADER include/spdk/gpt_spec.h 00:09:57.953 TEST_HEADER include/spdk/hexlify.h 00:09:57.953 TEST_HEADER include/spdk/histogram_data.h 00:09:57.953 TEST_HEADER include/spdk/idxd.h 00:09:57.953 TEST_HEADER include/spdk/idxd_spec.h 00:09:57.953 TEST_HEADER include/spdk/init.h 00:09:57.953 TEST_HEADER include/spdk/ioat.h 00:09:57.953 TEST_HEADER include/spdk/ioat_spec.h 00:09:57.953 TEST_HEADER include/spdk/iscsi_spec.h 00:09:57.953 TEST_HEADER include/spdk/json.h 00:09:57.953 TEST_HEADER include/spdk/jsonrpc.h 00:09:57.953 TEST_HEADER include/spdk/keyring.h 00:09:57.953 TEST_HEADER include/spdk/keyring_module.h 00:09:57.953 TEST_HEADER include/spdk/likely.h 00:09:57.953 TEST_HEADER include/spdk/log.h 00:09:57.953 TEST_HEADER include/spdk/lvol.h 00:09:57.953 TEST_HEADER include/spdk/memory.h 00:09:57.953 TEST_HEADER include/spdk/mmio.h 00:09:57.954 TEST_HEADER include/spdk/nbd.h 00:09:57.954 TEST_HEADER include/spdk/notify.h 00:09:57.954 TEST_HEADER include/spdk/nvme.h 00:09:57.954 TEST_HEADER include/spdk/nvme_intel.h 00:09:57.954 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:57.954 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:57.954 TEST_HEADER include/spdk/nvme_spec.h 00:09:57.954 TEST_HEADER include/spdk/nvme_zns.h 00:09:57.954 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:57.954 TEST_HEADER include/spdk/nvmf_fc_spec.h 
00:09:57.954 TEST_HEADER include/spdk/nvmf.h 00:09:57.954 TEST_HEADER include/spdk/nvmf_spec.h 00:09:57.954 TEST_HEADER include/spdk/nvmf_transport.h 00:09:57.954 TEST_HEADER include/spdk/opal.h 00:09:57.954 TEST_HEADER include/spdk/opal_spec.h 00:09:57.954 TEST_HEADER include/spdk/pci_ids.h 00:09:57.954 TEST_HEADER include/spdk/pipe.h 00:09:57.954 TEST_HEADER include/spdk/queue.h 00:09:57.954 TEST_HEADER include/spdk/reduce.h 00:09:57.954 TEST_HEADER include/spdk/rpc.h 00:09:57.954 TEST_HEADER include/spdk/scheduler.h 00:09:57.954 TEST_HEADER include/spdk/scsi.h 00:09:57.954 TEST_HEADER include/spdk/scsi_spec.h 00:09:57.954 CC app/spdk_nvme_perf/perf.o 00:09:57.954 TEST_HEADER include/spdk/sock.h 00:09:57.954 TEST_HEADER include/spdk/stdinc.h 00:09:57.954 TEST_HEADER include/spdk/string.h 00:09:57.954 TEST_HEADER include/spdk/thread.h 00:09:57.954 TEST_HEADER include/spdk/trace.h 00:09:57.954 TEST_HEADER include/spdk/trace_parser.h 00:09:57.954 TEST_HEADER include/spdk/tree.h 00:09:57.954 TEST_HEADER include/spdk/ublk.h 00:09:57.954 TEST_HEADER include/spdk/util.h 00:09:57.954 TEST_HEADER include/spdk/uuid.h 00:09:57.954 TEST_HEADER include/spdk/version.h 00:09:58.213 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:58.213 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:58.213 TEST_HEADER include/spdk/vhost.h 00:09:58.213 TEST_HEADER include/spdk/vmd.h 00:09:58.213 TEST_HEADER include/spdk/xor.h 00:09:58.213 TEST_HEADER include/spdk/zipf.h 00:09:58.213 CXX test/cpp_headers/accel.o 00:09:58.213 CC test/dma/test_dma/test_dma.o 00:09:58.213 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:58.213 LINK nvme_fuzz 00:09:58.213 LINK interrupt_tgt 00:09:58.213 CC examples/nvme/hotplug/hotplug.o 00:09:58.213 LINK idxd_perf 00:09:58.213 CXX test/cpp_headers/accel_module.o 00:09:58.213 CC app/spdk_nvme_identify/identify.o 00:09:58.473 CXX test/cpp_headers/assert.o 00:09:58.473 CXX test/cpp_headers/barrier.o 00:09:58.473 CC app/spdk_nvme_discover/discovery_aer.o 00:09:58.473 LINK hotplug 00:09:58.473 LINK test_dma 00:09:58.473 LINK vhost_fuzz 00:09:58.473 CXX test/cpp_headers/base64.o 00:09:58.473 CC test/env/mem_callbacks/mem_callbacks.o 00:09:58.473 LINK spdk_nvme_discover 00:09:58.743 CC test/env/vtophys/vtophys.o 00:09:58.743 CXX test/cpp_headers/bdev.o 00:09:58.743 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:58.743 CC examples/nvme/abort/abort.o 00:09:58.743 CC test/app/histogram_perf/histogram_perf.o 00:09:58.743 LINK vtophys 00:09:58.743 CC test/app/jsoncat/jsoncat.o 00:09:58.743 LINK spdk_nvme_perf 00:09:59.051 CXX test/cpp_headers/bdev_module.o 00:09:59.051 LINK cmb_copy 00:09:59.051 LINK histogram_perf 00:09:59.051 CC test/app/stub/stub.o 00:09:59.051 LINK jsoncat 00:09:59.051 LINK spdk_nvme_identify 00:09:59.051 LINK abort 00:09:59.051 CXX test/cpp_headers/bdev_zone.o 00:09:59.051 LINK mem_callbacks 00:09:59.051 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:59.051 CC test/env/memory/memory_ut.o 00:09:59.051 CC test/env/pci/pci_ut.o 00:09:59.051 LINK stub 00:09:59.415 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:59.415 CXX test/cpp_headers/bit_array.o 00:09:59.415 LINK env_dpdk_post_init 00:09:59.415 CC app/spdk_top/spdk_top.o 00:09:59.415 CC app/vhost/vhost.o 00:09:59.415 CXX test/cpp_headers/bit_pool.o 00:09:59.415 LINK iscsi_fuzz 00:09:59.415 LINK pmr_persistence 00:09:59.415 CC test/event/event_perf/event_perf.o 00:09:59.415 LINK pci_ut 00:09:59.675 CXX test/cpp_headers/blob_bdev.o 00:09:59.675 LINK event_perf 00:09:59.675 CC test/lvol/esnap/esnap.o 00:09:59.675 LINK 
vhost 00:09:59.675 CC test/nvme/aer/aer.o 00:09:59.675 CC app/spdk_dd/spdk_dd.o 00:09:59.675 CXX test/cpp_headers/blobfs_bdev.o 00:09:59.675 CC test/rpc_client/rpc_client_test.o 00:09:59.675 CC test/event/reactor/reactor.o 00:09:59.935 LINK aer 00:09:59.935 CXX test/cpp_headers/blobfs.o 00:09:59.935 LINK rpc_client_test 00:09:59.935 CC test/thread/poller_perf/poller_perf.o 00:09:59.935 LINK reactor 00:09:59.935 CC app/fio/nvme/fio_plugin.o 00:09:59.935 LINK memory_ut 00:09:59.935 LINK poller_perf 00:09:59.935 CXX test/cpp_headers/blob.o 00:10:00.195 LINK spdk_top 00:10:00.195 LINK spdk_dd 00:10:00.195 CC test/nvme/reset/reset.o 00:10:00.195 CC test/event/reactor_perf/reactor_perf.o 00:10:00.195 CC test/event/app_repeat/app_repeat.o 00:10:00.195 CXX test/cpp_headers/conf.o 00:10:00.454 CC test/event/scheduler/scheduler.o 00:10:00.454 LINK reactor_perf 00:10:00.454 CC test/nvme/sgl/sgl.o 00:10:00.454 CC app/fio/bdev/fio_plugin.o 00:10:00.454 CXX test/cpp_headers/config.o 00:10:00.454 LINK app_repeat 00:10:00.454 CXX test/cpp_headers/cpuset.o 00:10:00.454 LINK reset 00:10:00.454 LINK spdk_nvme 00:10:00.454 CXX test/cpp_headers/crc16.o 00:10:00.454 CXX test/cpp_headers/crc32.o 00:10:00.454 CC test/nvme/e2edp/nvme_dp.o 00:10:00.454 LINK scheduler 00:10:00.712 LINK sgl 00:10:00.712 CXX test/cpp_headers/crc64.o 00:10:00.712 CC test/nvme/overhead/overhead.o 00:10:00.712 CC test/nvme/startup/startup.o 00:10:00.712 CXX test/cpp_headers/dif.o 00:10:00.712 CC test/nvme/err_injection/err_injection.o 00:10:00.712 LINK nvme_dp 00:10:00.712 CXX test/cpp_headers/dma.o 00:10:00.712 LINK spdk_bdev 00:10:00.971 CXX test/cpp_headers/endian.o 00:10:00.971 LINK startup 00:10:00.971 CC test/nvme/reserve/reserve.o 00:10:00.971 LINK err_injection 00:10:00.971 CC test/nvme/simple_copy/simple_copy.o 00:10:00.971 LINK overhead 00:10:00.971 CXX test/cpp_headers/env_dpdk.o 00:10:00.971 CC test/nvme/connect_stress/connect_stress.o 00:10:00.971 CC test/nvme/boot_partition/boot_partition.o 00:10:00.971 LINK reserve 00:10:00.971 CXX test/cpp_headers/env.o 00:10:00.971 CC test/nvme/compliance/nvme_compliance.o 00:10:01.229 LINK simple_copy 00:10:01.229 CC test/nvme/fused_ordering/fused_ordering.o 00:10:01.229 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:01.229 CC test/nvme/fdp/fdp.o 00:10:01.229 LINK connect_stress 00:10:01.229 LINK boot_partition 00:10:01.229 CXX test/cpp_headers/event.o 00:10:01.229 CXX test/cpp_headers/fd_group.o 00:10:01.229 CC test/nvme/cuse/cuse.o 00:10:01.229 LINK fused_ordering 00:10:01.229 LINK doorbell_aers 00:10:01.487 LINK nvme_compliance 00:10:01.487 CXX test/cpp_headers/fd.o 00:10:01.487 CXX test/cpp_headers/file.o 00:10:01.487 CXX test/cpp_headers/ftl.o 00:10:01.487 CXX test/cpp_headers/gpt_spec.o 00:10:01.487 LINK fdp 00:10:01.487 CXX test/cpp_headers/hexlify.o 00:10:01.487 CXX test/cpp_headers/histogram_data.o 00:10:01.487 CXX test/cpp_headers/idxd.o 00:10:01.487 CXX test/cpp_headers/idxd_spec.o 00:10:01.487 CXX test/cpp_headers/init.o 00:10:01.487 CXX test/cpp_headers/ioat.o 00:10:01.487 CXX test/cpp_headers/ioat_spec.o 00:10:01.745 CXX test/cpp_headers/iscsi_spec.o 00:10:01.745 CXX test/cpp_headers/json.o 00:10:01.745 CXX test/cpp_headers/jsonrpc.o 00:10:01.745 CXX test/cpp_headers/keyring.o 00:10:01.745 CXX test/cpp_headers/keyring_module.o 00:10:01.745 CXX test/cpp_headers/likely.o 00:10:01.745 CXX test/cpp_headers/log.o 00:10:01.745 CXX test/cpp_headers/lvol.o 00:10:01.745 CXX test/cpp_headers/memory.o 00:10:01.745 CXX test/cpp_headers/mmio.o 00:10:01.745 CXX test/cpp_headers/nbd.o 
00:10:01.745 CXX test/cpp_headers/notify.o 00:10:01.745 CXX test/cpp_headers/nvme.o 00:10:01.745 CXX test/cpp_headers/nvme_intel.o 00:10:01.745 CXX test/cpp_headers/nvme_ocssd.o 00:10:02.005 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:02.005 CXX test/cpp_headers/nvme_spec.o 00:10:02.005 CXX test/cpp_headers/nvme_zns.o 00:10:02.005 CXX test/cpp_headers/nvmf_cmd.o 00:10:02.005 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:02.005 CXX test/cpp_headers/nvmf.o 00:10:02.005 CXX test/cpp_headers/nvmf_spec.o 00:10:02.005 CXX test/cpp_headers/nvmf_transport.o 00:10:02.005 CXX test/cpp_headers/opal.o 00:10:02.005 CXX test/cpp_headers/opal_spec.o 00:10:02.005 CXX test/cpp_headers/pci_ids.o 00:10:02.264 CXX test/cpp_headers/pipe.o 00:10:02.264 CXX test/cpp_headers/queue.o 00:10:02.264 CXX test/cpp_headers/reduce.o 00:10:02.264 CXX test/cpp_headers/rpc.o 00:10:02.264 CXX test/cpp_headers/scheduler.o 00:10:02.264 CXX test/cpp_headers/scsi.o 00:10:02.264 CXX test/cpp_headers/scsi_spec.o 00:10:02.264 CXX test/cpp_headers/sock.o 00:10:02.264 CXX test/cpp_headers/stdinc.o 00:10:02.264 CXX test/cpp_headers/string.o 00:10:02.264 LINK cuse 00:10:02.264 CXX test/cpp_headers/thread.o 00:10:02.264 CXX test/cpp_headers/trace.o 00:10:02.264 CXX test/cpp_headers/trace_parser.o 00:10:02.264 CXX test/cpp_headers/tree.o 00:10:02.264 CXX test/cpp_headers/ublk.o 00:10:02.523 CXX test/cpp_headers/util.o 00:10:02.523 CXX test/cpp_headers/uuid.o 00:10:02.523 CXX test/cpp_headers/version.o 00:10:02.523 CXX test/cpp_headers/vfio_user_pci.o 00:10:02.523 CXX test/cpp_headers/vfio_user_spec.o 00:10:02.523 CXX test/cpp_headers/vhost.o 00:10:02.523 CXX test/cpp_headers/vmd.o 00:10:02.523 CXX test/cpp_headers/xor.o 00:10:02.523 CXX test/cpp_headers/zipf.o 00:10:03.901 LINK esnap 00:10:04.469 00:10:04.469 real 0m59.859s 00:10:04.469 user 5m44.637s 00:10:04.469 sys 1m27.114s 00:10:04.469 20:01:46 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:04.469 20:01:46 -- common/autotest_common.sh@10 -- $ set +x 00:10:04.469 ************************************ 00:10:04.469 END TEST make 00:10:04.469 ************************************ 00:10:04.469 20:01:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:04.469 20:01:46 -- pm/common@30 -- $ signal_monitor_resources TERM 00:10:04.469 20:01:46 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:10:04.469 20:01:46 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.469 20:01:46 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:04.469 20:01:46 -- pm/common@45 -- $ pid=5356 00:10:04.469 20:01:46 -- pm/common@52 -- $ sudo kill -TERM 5356 00:10:04.469 20:01:46 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.469 20:01:46 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:04.469 20:01:46 -- pm/common@45 -- $ pid=5357 00:10:04.469 20:01:46 -- pm/common@52 -- $ sudo kill -TERM 5357 00:10:04.469 20:01:46 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.469 20:01:46 -- nvmf/common.sh@7 -- # uname -s 00:10:04.469 20:01:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.469 20:01:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.469 20:01:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.469 20:01:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.469 20:01:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.469 20:01:46 -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:10:04.469 20:01:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.469 20:01:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.469 20:01:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.469 20:01:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.469 20:01:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:10:04.469 20:01:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:10:04.469 20:01:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.469 20:01:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.469 20:01:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.469 20:01:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.469 20:01:46 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.469 20:01:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.469 20:01:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.469 20:01:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.469 20:01:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.469 20:01:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.469 20:01:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.469 20:01:46 -- paths/export.sh@5 -- # export PATH 00:10:04.469 20:01:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.469 20:01:46 -- nvmf/common.sh@47 -- # : 0 00:10:04.469 20:01:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.469 20:01:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.469 20:01:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.469 20:01:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.469 20:01:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.469 20:01:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.469 20:01:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.469 20:01:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.469 20:01:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:04.469 20:01:46 -- spdk/autotest.sh@32 -- # uname -s 00:10:04.469 20:01:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:04.469 20:01:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:04.469 20:01:46 -- spdk/autotest.sh@34 
-- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:04.469 20:01:46 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:04.469 20:01:46 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:04.469 20:01:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:04.728 20:01:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:04.728 20:01:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:04.728 20:01:46 -- spdk/autotest.sh@48 -- # udevadm_pid=52361 00:10:04.728 20:01:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:04.728 20:01:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:04.728 20:01:46 -- pm/common@17 -- # local monitor 00:10:04.728 20:01:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.728 20:01:46 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52363 00:10:04.728 20:01:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.728 20:01:46 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=52365 00:10:04.728 20:01:46 -- pm/common@26 -- # sleep 1 00:10:04.728 20:01:46 -- pm/common@21 -- # date +%s 00:10:04.728 20:01:46 -- pm/common@21 -- # date +%s 00:10:04.728 20:01:46 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713988906 00:10:04.728 20:01:46 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713988906 00:10:04.728 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713988906_collect-vmstat.pm.log 00:10:04.728 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713988906_collect-cpu-load.pm.log 00:10:05.677 20:01:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:05.677 20:01:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:05.677 20:01:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:05.677 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:10:05.677 20:01:47 -- spdk/autotest.sh@59 -- # create_test_list 00:10:05.677 20:01:47 -- common/autotest_common.sh@734 -- # xtrace_disable 00:10:05.677 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:10:05.677 20:01:47 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:05.677 20:01:47 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:05.677 20:01:47 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:05.677 20:01:47 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:05.677 20:01:47 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:05.677 20:01:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:05.677 20:01:47 -- common/autotest_common.sh@1441 -- # uname 00:10:05.677 20:01:47 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:10:05.677 20:01:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:05.677 20:01:47 -- common/autotest_common.sh@1461 -- # uname 00:10:05.677 20:01:47 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:10:05.677 20:01:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:05.677 20:01:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:05.677 20:01:47 -- spdk/autotest.sh@72 -- # hash lcov 
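The autotest.sh entries above save the runner's existing core-dump handler (the systemd-coredump pipe pattern) and point future core dumps at the repo's core-collector.sh and the coredumps output directory instead. The xtrace does not show the redirection targets of those echo calls, so the following is only a sketch of the usual shape of that swap; $rootdir and $output_dir stand in for the paths printed in the log, and writing core_pattern requires root:

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"
    # A leading '|' tells the kernel to pipe each core to the named program;
    # %P is the dumping process id, %s the signal, %t the time of the dump.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # Put the original handler back when the run finishes.
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT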
00:10:05.677 20:01:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:05.677 20:01:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:05.677 --rc lcov_branch_coverage=1 00:10:05.677 --rc lcov_function_coverage=1 00:10:05.677 --rc genhtml_branch_coverage=1 00:10:05.677 --rc genhtml_function_coverage=1 00:10:05.677 --rc genhtml_legend=1 00:10:05.677 --rc geninfo_all_blocks=1 00:10:05.677 ' 00:10:05.677 20:01:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:05.677 --rc lcov_branch_coverage=1 00:10:05.677 --rc lcov_function_coverage=1 00:10:05.677 --rc genhtml_branch_coverage=1 00:10:05.677 --rc genhtml_function_coverage=1 00:10:05.677 --rc genhtml_legend=1 00:10:05.677 --rc geninfo_all_blocks=1 00:10:05.677 ' 00:10:05.677 20:01:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:05.677 --rc lcov_branch_coverage=1 00:10:05.677 --rc lcov_function_coverage=1 00:10:05.677 --rc genhtml_branch_coverage=1 00:10:05.677 --rc genhtml_function_coverage=1 00:10:05.677 --rc genhtml_legend=1 00:10:05.677 --rc geninfo_all_blocks=1 00:10:05.677 --no-external' 00:10:05.677 20:01:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:05.677 --rc lcov_branch_coverage=1 00:10:05.677 --rc lcov_function_coverage=1 00:10:05.677 --rc genhtml_branch_coverage=1 00:10:05.677 --rc genhtml_function_coverage=1 00:10:05.677 --rc genhtml_legend=1 00:10:05.677 --rc geninfo_all_blocks=1 00:10:05.677 --no-external' 00:10:05.677 20:01:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:05.936 lcov: LCOV version 1.14 00:10:05.936 20:01:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:14.061 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:10:14.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:10:14.061 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:10:14.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:10:14.061 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:10:14.061 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:10:19.336 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:19.336 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:10:31.661 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:31.661 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:10:31.661 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:10:31.662 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 
00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:10:31.662 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:31.662 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:10:31.921 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:10:31.921 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:10:35.208 20:02:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:10:35.208 20:02:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:35.208 20:02:17 -- common/autotest_common.sh@10 -- # set +x 00:10:35.208 20:02:17 -- spdk/autotest.sh@91 -- # rm -f 00:10:35.208 20:02:17 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:36.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:36.186 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:36.186 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:36.186 20:02:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:10:36.186 20:02:18 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:36.186 20:02:18 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:36.186 20:02:18 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:36.186 20:02:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:36.186 20:02:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:36.186 20:02:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:36.186 20:02:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:36.186 20:02:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:36.186 20:02:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:36.186 20:02:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:36.186 20:02:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:10:36.186 20:02:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:10:36.186 20:02:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:36.186 20:02:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:10:36.186 20:02:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:10:36.186 20:02:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:36.186 20:02:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:36.186 20:02:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:10:36.186 20:02:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:36.186 20:02:18 -- spdk/autotest.sh@112 -- # 
[[ -z '' ]] 00:10:36.186 20:02:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:10:36.186 20:02:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:10:36.186 20:02:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:36.186 No valid GPT data, bailing 00:10:36.186 20:02:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:36.186 20:02:18 -- scripts/common.sh@391 -- # pt= 00:10:36.186 20:02:18 -- scripts/common.sh@392 -- # return 1 00:10:36.186 20:02:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:36.186 1+0 records in 00:10:36.186 1+0 records out 00:10:36.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00584706 s, 179 MB/s 00:10:36.186 20:02:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:36.186 20:02:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:36.186 20:02:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:10:36.186 20:02:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:10:36.186 20:02:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:36.186 No valid GPT data, bailing 00:10:36.186 20:02:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:36.186 20:02:18 -- scripts/common.sh@391 -- # pt= 00:10:36.186 20:02:18 -- scripts/common.sh@392 -- # return 1 00:10:36.186 20:02:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:36.186 1+0 records in 00:10:36.186 1+0 records out 00:10:36.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00744881 s, 141 MB/s 00:10:36.186 20:02:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:36.186 20:02:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:36.186 20:02:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:10:36.186 20:02:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:10:36.186 20:02:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:36.186 No valid GPT data, bailing 00:10:36.186 20:02:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:36.450 20:02:18 -- scripts/common.sh@391 -- # pt= 00:10:36.451 20:02:18 -- scripts/common.sh@392 -- # return 1 00:10:36.451 20:02:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:36.451 1+0 records in 00:10:36.451 1+0 records out 00:10:36.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435389 s, 241 MB/s 00:10:36.451 20:02:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:36.451 20:02:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:36.451 20:02:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:10:36.451 20:02:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:10:36.451 20:02:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:36.451 No valid GPT data, bailing 00:10:36.451 20:02:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:36.451 20:02:18 -- scripts/common.sh@391 -- # pt= 00:10:36.451 20:02:18 -- scripts/common.sh@392 -- # return 1 00:10:36.451 20:02:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:36.451 1+0 records in 00:10:36.451 1+0 records out 00:10:36.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00613736 s, 171 MB/s 00:10:36.451 20:02:18 -- spdk/autotest.sh@118 -- # sync 00:10:36.710 20:02:18 -- spdk/autotest.sh@120 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:10:36.710 20:02:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:36.710 20:02:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:39.310 20:02:21 -- spdk/autotest.sh@124 -- # uname -s 00:10:39.310 20:02:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:10:39.310 20:02:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:39.310 20:02:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:39.310 20:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.310 20:02:21 -- common/autotest_common.sh@10 -- # set +x 00:10:39.310 ************************************ 00:10:39.310 START TEST setup.sh 00:10:39.310 ************************************ 00:10:39.310 20:02:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:39.310 * Looking for test storage... 00:10:39.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:39.310 20:02:21 -- setup/test-setup.sh@10 -- # uname -s 00:10:39.310 20:02:21 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:10:39.310 20:02:21 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:39.310 20:02:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:39.310 20:02:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.310 20:02:21 -- common/autotest_common.sh@10 -- # set +x 00:10:39.310 ************************************ 00:10:39.310 START TEST acl 00:10:39.310 ************************************ 00:10:39.310 20:02:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:39.569 * Looking for test storage... 
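Before handing the disks to the setup tests, the pre_cleanup pass above walks every /dev/nvme*n* namespace, probes it for an existing partition table, and zeroes the first 1 MiB when none is found so stale metadata cannot leak into later runs. A rough, simplified equivalent of that loop (run as root; the real helper also consults scripts/spdk-gpt.py and skips zoned namespaces, both omitted here):

    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue   # skip partitions, keep whole namespaces
        # blkid prints the partition-table type, or nothing if the device has none.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            # No valid GPT/MBR: wipe the first MiB, as the dd lines in the log do.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync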
00:10:39.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:39.569 20:02:21 -- setup/acl.sh@10 -- # get_zoned_devs 00:10:39.569 20:02:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:39.569 20:02:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:39.569 20:02:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:39.569 20:02:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:39.569 20:02:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:39.569 20:02:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:39.569 20:02:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:39.569 20:02:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:39.569 20:02:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:39.569 20:02:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:39.569 20:02:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:10:39.569 20:02:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:10:39.569 20:02:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:39.569 20:02:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:10:39.569 20:02:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:10:39.569 20:02:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:39.569 20:02:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:39.569 20:02:21 -- setup/acl.sh@12 -- # devs=() 00:10:39.569 20:02:21 -- setup/acl.sh@12 -- # declare -a devs 00:10:39.569 20:02:21 -- setup/acl.sh@13 -- # drivers=() 00:10:39.569 20:02:21 -- setup/acl.sh@13 -- # declare -A drivers 00:10:39.569 20:02:21 -- setup/acl.sh@51 -- # setup reset 00:10:39.569 20:02:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:39.569 20:02:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:40.507 20:02:22 -- setup/acl.sh@52 -- # collect_setup_devs 00:10:40.507 20:02:22 -- setup/acl.sh@16 -- # local dev driver 00:10:40.507 20:02:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.507 20:02:22 -- setup/acl.sh@15 -- # setup output status 00:10:40.507 20:02:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:40.507 20:02:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:41.080 20:02:23 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:10:41.080 20:02:23 -- setup/acl.sh@19 -- # continue 00:10:41.080 20:02:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:41.080 Hugepages 00:10:41.080 node hugesize free / total 00:10:41.080 20:02:23 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:10:41.080 20:02:23 -- setup/acl.sh@19 -- # continue 00:10:41.080 20:02:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:41.080 00:10:41.080 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:10:41.080 20:02:23 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:10:41.080 20:02:23 -- setup/acl.sh@19 -- # continue 00:10:41.080 20:02:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:41.338 20:02:23 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:10:41.338 20:02:23 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:10:41.338 20:02:23 -- setup/acl.sh@20 -- # continue 00:10:41.338 20:02:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:41.338 20:02:23 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:10:41.338 20:02:23 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:41.338 20:02:23 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:41.338 20:02:23 -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:41.338 20:02:23 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:41.338 20:02:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:41.338 20:02:23 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:10:41.338 20:02:23 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:41.338 20:02:23 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:41.339 20:02:23 -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:41.339 20:02:23 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:41.339 20:02:23 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:41.339 20:02:23 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:10:41.339 20:02:23 -- setup/acl.sh@54 -- # run_test denied denied 00:10:41.339 20:02:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:41.339 20:02:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.339 20:02:23 -- common/autotest_common.sh@10 -- # set +x 00:10:41.598 ************************************ 00:10:41.598 START TEST denied 00:10:41.598 ************************************ 00:10:41.598 20:02:23 -- common/autotest_common.sh@1111 -- # denied 00:10:41.598 20:02:23 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:10:41.598 20:02:23 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:10:41.598 20:02:23 -- setup/acl.sh@38 -- # setup output config 00:10:41.598 20:02:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:41.598 20:02:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:42.536 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:10:42.536 20:02:24 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:10:42.536 20:02:24 -- setup/acl.sh@28 -- # local dev driver 00:10:42.536 20:02:24 -- setup/acl.sh@30 -- # for dev in "$@" 00:10:42.536 20:02:24 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:10:42.536 20:02:24 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:10:42.536 20:02:24 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:42.536 20:02:24 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:42.536 20:02:24 -- setup/acl.sh@41 -- # setup reset 00:10:42.536 20:02:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:42.536 20:02:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.104 00:10:43.104 real 0m1.646s 00:10:43.104 user 0m0.626s 00:10:43.104 sys 0m0.982s 00:10:43.104 20:02:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.104 20:02:25 -- common/autotest_common.sh@10 -- # set +x 00:10:43.104 ************************************ 00:10:43.104 END TEST denied 00:10:43.104 ************************************ 00:10:43.104 20:02:25 -- setup/acl.sh@55 
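The collect_setup_devs loop traced above feeds the `setup.sh status` table through `read -r _ dev _ _ _ driver _`, skips the hugepage summary rows, and keeps only PCI functions whose driver column says nvme; those two controllers are what the denied and allowed checks in this test exercise. A simplified sketch of that filter (column order assumed to match the Type/BDF/Vendor/Device/NUMA/Driver header above; the PCI_BLOCKED check and relative paths are illustrative):

    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        # Only rows whose second column looks like a PCI address are device rows.
        [[ $dev == *:*:*.* ]] || continue
        if [[ $driver == nvme ]]; then
            devs+=("$dev")
            drivers["$dev"]=$driver
        fi
    done < <(scripts/setup.sh status)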
-- # run_test allowed allowed 00:10:43.104 20:02:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:43.104 20:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.104 20:02:25 -- common/autotest_common.sh@10 -- # set +x 00:10:43.363 ************************************ 00:10:43.363 START TEST allowed 00:10:43.363 ************************************ 00:10:43.363 20:02:25 -- common/autotest_common.sh@1111 -- # allowed 00:10:43.363 20:02:25 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:10:43.363 20:02:25 -- setup/acl.sh@45 -- # setup output config 00:10:43.363 20:02:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:43.363 20:02:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:43.363 20:02:25 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:10:44.299 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:44.299 20:02:26 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:10:44.299 20:02:26 -- setup/acl.sh@28 -- # local dev driver 00:10:44.299 20:02:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:10:44.299 20:02:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:10:44.299 20:02:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:10:44.299 20:02:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:44.299 20:02:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:44.299 20:02:26 -- setup/acl.sh@48 -- # setup reset 00:10:44.300 20:02:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:44.300 20:02:26 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:44.865 00:10:44.865 real 0m1.687s 00:10:44.865 user 0m0.645s 00:10:44.865 sys 0m1.033s 00:10:44.865 20:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:44.865 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:44.865 ************************************ 00:10:44.865 END TEST allowed 00:10:44.865 ************************************ 00:10:45.125 ************************************ 00:10:45.125 END TEST acl 00:10:45.125 ************************************ 00:10:45.125 00:10:45.125 real 0m5.626s 00:10:45.125 user 0m2.221s 00:10:45.125 sys 0m3.355s 00:10:45.125 20:02:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:45.125 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:45.125 20:02:27 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:45.125 20:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:45.125 20:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.125 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:45.125 ************************************ 00:10:45.125 START TEST hugepages 00:10:45.125 ************************************ 00:10:45.125 20:02:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:45.125 * Looking for test storage... 
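Both the denied and the allowed cases above end in the same verify step: resolve each controller's driver symlink under sysfs and confirm its basename is nvme (the allowed case additionally greps the config output for the 0000:00:10.0 nvme -> uio_pci_generic rebind). Roughly, and ignoring the PCI_BLOCKED/PCI_ALLOWED environment handling:

    verify() {
        local bdf driver
        for bdf in "$@"; do
            [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
            # e.g. /sys/bus/pci/drivers/nvme -> still bound to the kernel nvme driver
            driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
            [[ ${driver##*/} == nvme ]] || return 1
        done
    }

    verify 0000:00:11.0 && echo 'controller is bound to the kernel nvme driver'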
00:10:45.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:45.412 20:02:27 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:10:45.412 20:02:27 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:10:45.412 20:02:27 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:10:45.412 20:02:27 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:10:45.412 20:02:27 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:10:45.412 20:02:27 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:10:45.412 20:02:27 -- setup/common.sh@17 -- # local get=Hugepagesize 00:10:45.412 20:02:27 -- setup/common.sh@18 -- # local node= 00:10:45.413 20:02:27 -- setup/common.sh@19 -- # local var val 00:10:45.413 20:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:10:45.413 20:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:45.413 20:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:45.413 20:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:45.413 20:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:10:45.413 20:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 5606292 kB' 'MemAvailable: 7410248 kB' 'Buffers: 2436 kB' 'Cached: 2016624 kB' 'SwapCached: 0 kB' 'Active: 834572 kB' 'Inactive: 1290724 kB' 'Active(anon): 116728 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290724 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 107616 kB' 'Mapped: 48848 kB' 'Shmem: 10488 kB' 'KReclaimable: 64652 kB' 'Slab: 141776 kB' 'SReclaimable: 64652 kB' 'SUnreclaim: 77124 kB' 'KernelStack: 6332 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 349920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- 
setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.413 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.413 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # continue 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.414 20:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.414 20:02:27 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:45.414 20:02:27 -- setup/common.sh@33 -- # echo 2048 00:10:45.414 20:02:27 -- setup/common.sh@33 -- # return 0 00:10:45.414 20:02:27 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:10:45.414 20:02:27 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:10:45.414 20:02:27 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:10:45.414 20:02:27 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:10:45.414 20:02:27 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:10:45.414 20:02:27 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
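The trace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it reaches Hugepagesize, at which point it echoes 2048 and returns; every non-matching key shows up in the log as a "continue". A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source, assuming the usual "key: value kB" layout of /proc/meminfo:

```bash
#!/usr/bin/env bash
# Sketch of the scan traced above: print the value for one /proc/meminfo key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys -> the "continue" lines in the trace
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this runner (value in kB)
```

hugepages.sh then records that result as default_hugepages=2048 and unsets HUGE_EVEN_ALLOC, HUGEMEM, HUGENODE and NRHUGE so the test starts from a known state.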
00:10:45.414 20:02:27 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:10:45.414 20:02:27 -- setup/hugepages.sh@207 -- # get_nodes 00:10:45.414 20:02:27 -- setup/hugepages.sh@27 -- # local node 00:10:45.414 20:02:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:45.414 20:02:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:10:45.414 20:02:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:45.414 20:02:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:45.414 20:02:27 -- setup/hugepages.sh@208 -- # clear_hp 00:10:45.414 20:02:27 -- setup/hugepages.sh@37 -- # local node hp 00:10:45.414 20:02:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:45.414 20:02:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:45.414 20:02:27 -- setup/hugepages.sh@41 -- # echo 0 00:10:45.414 20:02:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:45.414 20:02:27 -- setup/hugepages.sh@41 -- # echo 0 00:10:45.414 20:02:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:45.414 20:02:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:45.414 20:02:27 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:10:45.414 20:02:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:45.414 20:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.414 20:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:45.414 ************************************ 00:10:45.414 START TEST default_setup 00:10:45.414 ************************************ 00:10:45.414 20:02:27 -- common/autotest_common.sh@1111 -- # default_setup 00:10:45.414 20:02:27 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:10:45.414 20:02:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:10:45.414 20:02:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:45.414 20:02:27 -- setup/hugepages.sh@51 -- # shift 00:10:45.414 20:02:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:45.414 20:02:27 -- setup/hugepages.sh@52 -- # local node_ids 00:10:45.414 20:02:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:45.414 20:02:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:45.414 20:02:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:45.414 20:02:27 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:45.414 20:02:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:45.414 20:02:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:45.414 20:02:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:45.414 20:02:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:45.414 20:02:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:45.414 20:02:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:45.414 20:02:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:45.414 20:02:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:45.414 20:02:27 -- setup/hugepages.sh@73 -- # return 0 00:10:45.414 20:02:27 -- setup/hugepages.sh@137 -- # setup output 00:10:45.414 20:02:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:45.414 20:02:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.246 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.246 
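With the default hugepage size known, default_setup asks get_test_nr_hugepages for 2097152 kB and arrives at nr_hugepages=1024 on the single NUMA node, after clear_hp has zeroed any leftover per-node hugepage reservations; setup.sh then rebinds the NVMe controllers to uio_pci_generic as shown. A back-of-the-envelope version of that sizing (variable names are illustrative, not the exact hugepages.sh internals):

```bash
# 2 GiB of hugepage memory at the default 2 MiB page size -> 1024 pages.
size_kb=2097152
hugepage_kb=2048
nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024
echo "nr_hugepages=$nr_hugepages"

# clear_hp, as traced, writes 0 into every hugepage-size counter of node0
# before the test allocates its own pages (needs root; shown for illustration only).
for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done
```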
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:46.246 20:02:28 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:10:46.246 20:02:28 -- setup/hugepages.sh@89 -- # local node 00:10:46.246 20:02:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:46.246 20:02:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:46.246 20:02:28 -- setup/hugepages.sh@92 -- # local surp 00:10:46.246 20:02:28 -- setup/hugepages.sh@93 -- # local resv 00:10:46.246 20:02:28 -- setup/hugepages.sh@94 -- # local anon 00:10:46.246 20:02:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:46.246 20:02:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:46.246 20:02:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:46.246 20:02:28 -- setup/common.sh@18 -- # local node= 00:10:46.246 20:02:28 -- setup/common.sh@19 -- # local var val 00:10:46.246 20:02:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.246 20:02:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.246 20:02:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.246 20:02:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.246 20:02:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.246 20:02:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7648076 kB' 'MemAvailable: 9451940 kB' 'Buffers: 2436 kB' 'Cached: 2016648 kB' 'SwapCached: 0 kB' 'Active: 851020 kB' 'Inactive: 1290768 kB' 'Active(anon): 133176 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 124300 kB' 'Mapped: 49020 kB' 'Shmem: 10464 kB' 'KReclaimable: 64380 kB' 'Slab: 141656 kB' 'SReclaimable: 64380 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6368 kB' 'PageTables: 4616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 
20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.246 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.246 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 
-- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.247 20:02:28 -- setup/common.sh@33 -- # echo 0 00:10:46.247 20:02:28 -- setup/common.sh@33 -- # return 0 00:10:46.247 20:02:28 -- setup/hugepages.sh@97 -- # anon=0 00:10:46.247 20:02:28 -- 
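verify_nr_hugepages first confirms transparent hugepages are not pinned to [never] (the "always [madvise] never" test above) and then samples AnonHugePages, which comes back 0, before the scans that follow read HugePages_Surp and HugePages_Rsvd the same way. Roughly what that accounting amounts to, reusing the get_meminfo sketch from earlier (the real checks live in setup/hugepages.sh):

```bash
nr_hugepages=1024                           # what default_setup configured
anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB here
surp=$(get_meminfo_sketch HugePages_Surp)   # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0
total=$(get_meminfo_sketch HugePages_Total) # 1024 on this runner

# The later assertion in the trace: every configured page is accounted for.
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
```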
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:46.247 20:02:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:46.247 20:02:28 -- setup/common.sh@18 -- # local node= 00:10:46.247 20:02:28 -- setup/common.sh@19 -- # local var val 00:10:46.247 20:02:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.247 20:02:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.247 20:02:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.247 20:02:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.247 20:02:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.247 20:02:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7648856 kB' 'MemAvailable: 9452720 kB' 'Buffers: 2436 kB' 'Cached: 2016648 kB' 'SwapCached: 0 kB' 'Active: 850744 kB' 'Inactive: 1290768 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 123992 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 64380 kB' 'Slab: 141656 kB' 'SReclaimable: 64380 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6368 kB' 'PageTables: 4604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.247 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.247 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 
00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- 
setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 
00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.248 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.248 20:02:28 -- setup/common.sh@33 -- # echo 0 00:10:46.248 20:02:28 -- setup/common.sh@33 -- # return 0 00:10:46.248 20:02:28 -- setup/hugepages.sh@99 -- # surp=0 00:10:46.248 20:02:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:46.248 20:02:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:46.248 20:02:28 -- setup/common.sh@18 -- # local node= 00:10:46.248 20:02:28 -- setup/common.sh@19 -- # local var val 00:10:46.248 20:02:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.248 20:02:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.248 20:02:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.248 20:02:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.248 20:02:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.248 20:02:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.248 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7649024 kB' 'MemAvailable: 9452888 kB' 'Buffers: 2436 kB' 'Cached: 2016648 kB' 'SwapCached: 0 kB' 'Active: 850516 kB' 'Inactive: 1290768 kB' 
'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 123792 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 64380 kB' 'Slab: 141656 kB' 'SReclaimable: 64380 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6320 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 
20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.249 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.249 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 
20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.250 20:02:28 -- setup/common.sh@33 -- # echo 0 00:10:46.250 20:02:28 -- setup/common.sh@33 -- # return 0 00:10:46.250 20:02:28 -- setup/hugepages.sh@100 -- # resv=0 00:10:46.250 20:02:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:46.250 nr_hugepages=1024 00:10:46.250 resv_hugepages=0 00:10:46.250 20:02:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:46.250 surplus_hugepages=0 00:10:46.250 20:02:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:46.250 anon_hugepages=0 00:10:46.250 20:02:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:46.250 20:02:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:46.250 20:02:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:46.250 20:02:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:46.250 20:02:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:46.250 20:02:28 -- setup/common.sh@18 -- # local node= 00:10:46.250 20:02:28 -- setup/common.sh@19 -- # local var val 00:10:46.250 20:02:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.250 20:02:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.250 20:02:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.250 20:02:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.250 20:02:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.250 20:02:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7649024 kB' 'MemAvailable: 9452888 kB' 'Buffers: 2436 kB' 'Cached: 2016648 kB' 'SwapCached: 0 kB' 'Active: 850812 kB' 'Inactive: 1290768 kB' 'Active(anon): 132968 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290768 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 124080 kB' 'Mapped: 48888 kB' 'Shmem: 10464 kB' 'KReclaimable: 64380 kB' 'Slab: 141656 kB' 'SReclaimable: 64380 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6320 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 
366208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.250 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.250 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 
00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 
20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.251 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.251 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.251 
20:02:28 -- setup/common.sh@33 -- # echo 1024 00:10:46.251 20:02:28 -- setup/common.sh@33 -- # return 0 00:10:46.251 20:02:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:46.251 20:02:28 -- setup/hugepages.sh@112 -- # get_nodes 00:10:46.251 20:02:28 -- setup/hugepages.sh@27 -- # local node 00:10:46.251 20:02:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:46.251 20:02:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:46.251 20:02:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:46.252 20:02:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:46.252 20:02:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:46.252 20:02:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:46.252 20:02:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:46.252 20:02:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:46.252 20:02:28 -- setup/common.sh@18 -- # local node=0 00:10:46.252 20:02:28 -- setup/common.sh@19 -- # local var val 00:10:46.252 20:02:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.252 20:02:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.252 20:02:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:46.252 20:02:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:46.252 20:02:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.252 20:02:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7648520 kB' 'MemUsed: 4593456 kB' 'SwapCached: 0 kB' 'Active: 850532 kB' 'Inactive: 1290772 kB' 'Active(anon): 132688 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290772 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'FilePages: 2019084 kB' 'Mapped: 48888 kB' 'AnonPages: 123920 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64380 kB' 'Slab: 141632 kB' 'SReclaimable: 64380 kB' 'SUnreclaim: 77252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 
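Each get_meminfo call expands into the long [[ ... ]] / continue runs that fill this log because the parser walks the meminfo file one field at a time under xtrace: it reads /proc/meminfo by default, switches to /sys/devices/system/node/node<N>/meminfo when a node id is passed (those lines carry a "Node N " prefix that gets stripped), and splits each line with IFS=': '. A sketch of that pattern under the same assumptions (get_meminfo_value is an illustrative name, not the script's own helper):

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs; each line there starts with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

For example, get_meminfo_value HugePages_Surp 0 performs the node0 lookup traced above, whose result feeds into the per-node totals compared further down against "node0=1024 expecting 1024".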
00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.252 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.252 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # continue 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.512 20:02:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.512 20:02:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.512 20:02:28 -- setup/common.sh@33 -- # echo 0 00:10:46.512 20:02:28 -- setup/common.sh@33 -- # return 0 00:10:46.512 20:02:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:46.512 20:02:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:46.512 20:02:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:46.512 20:02:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:46.512 20:02:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:46.512 node0=1024 expecting 1024 00:10:46.512 20:02:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:46.512 00:10:46.512 real 0m1.010s 00:10:46.512 user 0m0.431s 00:10:46.512 sys 0m0.557s 00:10:46.512 20:02:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.512 20:02:28 -- common/autotest_common.sh@10 -- # set +x 00:10:46.512 ************************************ 00:10:46.512 END TEST default_setup 00:10:46.512 ************************************ 00:10:46.512 20:02:28 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:10:46.512 20:02:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:46.512 20:02:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.512 20:02:28 -- common/autotest_common.sh@10 -- # set +x 00:10:46.512 ************************************ 00:10:46.512 START TEST 
per_node_1G_alloc 00:10:46.512 ************************************ 00:10:46.512 20:02:28 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:10:46.512 20:02:28 -- setup/hugepages.sh@143 -- # local IFS=, 00:10:46.512 20:02:28 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:10:46.512 20:02:28 -- setup/hugepages.sh@49 -- # local size=1048576 00:10:46.512 20:02:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:46.512 20:02:28 -- setup/hugepages.sh@51 -- # shift 00:10:46.512 20:02:28 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:46.512 20:02:28 -- setup/hugepages.sh@52 -- # local node_ids 00:10:46.512 20:02:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:46.512 20:02:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:46.512 20:02:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:46.512 20:02:28 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:46.512 20:02:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:46.512 20:02:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:46.512 20:02:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:46.512 20:02:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:46.512 20:02:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:46.512 20:02:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:46.512 20:02:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:46.512 20:02:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:10:46.512 20:02:28 -- setup/hugepages.sh@73 -- # return 0 00:10:46.512 20:02:28 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:10:46.512 20:02:28 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:10:46.512 20:02:28 -- setup/hugepages.sh@146 -- # setup output 00:10:46.512 20:02:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:46.512 20:02:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:47.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.084 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.084 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.084 20:02:29 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:10:47.084 20:02:29 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:10:47.084 20:02:29 -- setup/hugepages.sh@89 -- # local node 00:10:47.084 20:02:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:47.084 20:02:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:47.084 20:02:29 -- setup/hugepages.sh@92 -- # local surp 00:10:47.084 20:02:29 -- setup/hugepages.sh@93 -- # local resv 00:10:47.084 20:02:29 -- setup/hugepages.sh@94 -- # local anon 00:10:47.084 20:02:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:47.084 20:02:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:47.084 20:02:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:47.084 20:02:29 -- setup/common.sh@18 -- # local node= 00:10:47.084 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.084 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.084 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.084 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.084 20:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.084 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.084 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8702696 kB' 'MemAvailable: 10506588 kB' 'Buffers: 2436 kB' 'Cached: 2016652 kB' 'SwapCached: 0 kB' 'Active: 850720 kB' 'Inactive: 1290780 kB' 'Active(anon): 132876 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124240 kB' 'Mapped: 49024 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141676 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77264 kB' 'KernelStack: 6276 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 
00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.084 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.084 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 
00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.085 20:02:29 -- setup/common.sh@33 -- # echo 0 00:10:47.085 20:02:29 -- setup/common.sh@33 -- # return 0 00:10:47.085 20:02:29 -- setup/hugepages.sh@97 -- # anon=0 00:10:47.085 20:02:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:47.085 20:02:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.085 20:02:29 -- setup/common.sh@18 -- # local node= 00:10:47.085 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.085 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.085 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.085 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.085 20:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.085 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.085 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8703328 kB' 'MemAvailable: 10507220 kB' 'Buffers: 2436 kB' 'Cached: 2016652 kB' 'SwapCached: 0 kB' 'Active: 850544 kB' 'Inactive: 1290780 kB' 'Active(anon): 132700 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 
'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141672 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6320 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.085 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.085 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var 
val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.086 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.086 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 
00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.087 20:02:29 -- setup/common.sh@33 -- # echo 0 00:10:47.087 20:02:29 -- setup/common.sh@33 -- # return 0 00:10:47.087 20:02:29 -- setup/hugepages.sh@99 -- # surp=0 00:10:47.087 20:02:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:47.087 20:02:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:47.087 20:02:29 -- setup/common.sh@18 -- # local node= 00:10:47.087 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.087 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.087 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.087 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.087 20:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.087 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.087 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8703960 kB' 'MemAvailable: 10507852 kB' 'Buffers: 2436 kB' 'Cached: 2016652 kB' 'SwapCached: 0 kB' 'Active: 850576 kB' 'Inactive: 1290780 kB' 'Active(anon): 132732 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124100 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141672 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6320 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- 
# continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- 
# [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.087 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.087 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- 
# read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # 
continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.088 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.088 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.089 20:02:29 -- setup/common.sh@33 -- # echo 0 00:10:47.089 20:02:29 -- setup/common.sh@33 -- # return 0 00:10:47.089 20:02:29 -- setup/hugepages.sh@100 -- # resv=0 00:10:47.089 20:02:29 -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:10:47.089 nr_hugepages=512 00:10:47.089 resv_hugepages=0 00:10:47.089 20:02:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:47.089 surplus_hugepages=0 00:10:47.089 20:02:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:47.089 anon_hugepages=0 00:10:47.089 20:02:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:47.089 20:02:29 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:47.089 20:02:29 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:10:47.089 20:02:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:47.089 20:02:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:47.089 20:02:29 -- setup/common.sh@18 -- # local node= 00:10:47.089 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.089 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.089 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.089 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.089 20:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.089 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.089 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8703960 kB' 'MemAvailable: 10507852 kB' 'Buffers: 2436 kB' 'Cached: 2016652 kB' 'SwapCached: 0 kB' 'Active: 850500 kB' 'Inactive: 1290780 kB' 'Active(anon): 132656 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141672 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77260 kB' 'KernelStack: 6304 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 
-- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- 
setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.089 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.089 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 
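The trace above is setup/common.sh walking /proc/meminfo one record at a time: IFS is set to ': ', each record is split into a field name and a value, every field that is not the one requested falls through to "continue", and the value of the matching field is finally echoed back to the caller. The backslash-heavy comparisons such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are only how set -x prints a literal right-hand side of a [[ == ]] test. A minimal sketch of that lookup pattern, assuming a simplified stand-alone helper (get_meminfo_value is a hypothetical name, not the script's real interface):

#!/usr/bin/env bash
# Minimal sketch of the lookup traced above (a simplified stand-in, not the
# real setup/common.sh): scan /proc/meminfo and print the value of one field.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every field other than the requested one is skipped, exactly as the
        # long run of "continue" lines in the trace shows.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Example: the counters the hugepages test keeps comparing.
get_meminfo_value HugePages_Total
get_meminfo_value HugePages_Rsvd
get_meminfo_value HugePages_Surp
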
00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.090 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.090 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.090 20:02:29 -- setup/common.sh@33 -- # echo 512 00:10:47.090 20:02:29 -- setup/common.sh@33 -- # return 0 00:10:47.090 20:02:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:47.090 20:02:29 -- setup/hugepages.sh@112 -- # get_nodes 00:10:47.091 20:02:29 -- setup/hugepages.sh@27 -- # local node 00:10:47.091 20:02:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:47.091 20:02:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:47.091 20:02:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:47.091 20:02:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:47.091 20:02:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:47.091 20:02:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:47.091 20:02:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:47.091 20:02:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.091 20:02:29 -- setup/common.sh@18 -- # local node=0 00:10:47.091 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.091 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.091 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.091 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 
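When get_meminfo is called with a node argument, the trace switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo. Every record in that per-node file carries a "Node <N> " prefix, which the script strips with the extglob expansion "${mem[@]#Node +([0-9]) }" before running the same field scan. A sketch of that per-node variant, under the same simplifying assumptions (node_meminfo_value is a hypothetical helper, not the script's real interface):

#!/usr/bin/env bash
# Sketch of the per-node lookup traced above (assumed simplification, not the
# real script): read /sys/devices/system/node/node<N>/meminfo, whose lines
# look like "Node 0 HugePages_Surp:      0", drop the "Node <N> " prefix,
# then match the requested field as in the flat /proc/meminfo case.
shopt -s extglob

node_meminfo_value() {
    local node=$1 get=$2 mem_f line var val _
    mem_f=/sys/devices/system/node/node${node}/meminfo
    [[ -e $mem_f ]] || return 1
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }          # strip the per-node prefix
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example: surplus hugepages currently sitting on NUMA node 0.
node_meminfo_value 0 HugePages_Surp
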
00:10:47.091 20:02:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:47.091 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.091 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8703960 kB' 'MemUsed: 3538016 kB' 'SwapCached: 0 kB' 'Active: 850504 kB' 'Inactive: 1290780 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'FilePages: 2019088 kB' 'Mapped: 48896 kB' 'AnonPages: 124020 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 141672 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 
-- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.091 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.091 20:02:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.092 20:02:29 -- setup/common.sh@32 -- 
# continue 00:10:47.092 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.092 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.092 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.092 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.092 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.092 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.092 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.092 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.092 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.092 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.092 20:02:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.092 20:02:29 -- setup/common.sh@33 -- # echo 0 00:10:47.092 20:02:29 -- setup/common.sh@33 -- # return 0 00:10:47.092 20:02:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:47.092 20:02:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:47.092 20:02:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:47.092 20:02:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:47.092 20:02:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:47.092 node0=512 expecting 512 00:10:47.092 20:02:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:47.092 00:10:47.092 real 0m0.662s 00:10:47.092 user 0m0.321s 00:10:47.092 sys 0m0.385s 00:10:47.092 20:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.092 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:10:47.092 ************************************ 00:10:47.092 END TEST per_node_1G_alloc 00:10:47.092 ************************************ 00:10:47.092 20:02:29 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:10:47.092 20:02:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.092 20:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.092 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:10:47.351 ************************************ 00:10:47.351 START TEST even_2G_alloc 00:10:47.351 ************************************ 00:10:47.351 20:02:29 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:10:47.351 20:02:29 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:10:47.351 20:02:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:10:47.351 20:02:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:47.351 20:02:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:47.351 20:02:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:47.351 20:02:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:47.351 20:02:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:47.351 20:02:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:47.351 20:02:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:47.351 20:02:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:47.351 20:02:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:47.351 20:02:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:47.351 20:02:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:47.351 20:02:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:47.351 20:02:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:47.351 20:02:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:10:47.351 20:02:29 -- setup/hugepages.sh@83 -- # : 
0 00:10:47.351 20:02:29 -- setup/hugepages.sh@84 -- # : 0 00:10:47.351 20:02:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:47.351 20:02:29 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:10:47.351 20:02:29 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:10:47.351 20:02:29 -- setup/hugepages.sh@153 -- # setup output 00:10:47.351 20:02:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:47.351 20:02:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:47.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.922 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.922 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.922 20:02:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:10:47.922 20:02:29 -- setup/hugepages.sh@89 -- # local node 00:10:47.922 20:02:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:47.922 20:02:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:47.922 20:02:29 -- setup/hugepages.sh@92 -- # local surp 00:10:47.922 20:02:29 -- setup/hugepages.sh@93 -- # local resv 00:10:47.922 20:02:29 -- setup/hugepages.sh@94 -- # local anon 00:10:47.922 20:02:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:47.922 20:02:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:47.922 20:02:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:47.922 20:02:29 -- setup/common.sh@18 -- # local node= 00:10:47.922 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.922 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.922 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.922 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.922 20:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.922 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.922 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662500 kB' 'MemAvailable: 9466396 kB' 'Buffers: 2436 kB' 'Cached: 2016656 kB' 'SwapCached: 0 kB' 'Active: 851068 kB' 'Inactive: 1290784 kB' 'Active(anon): 133224 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124356 kB' 'Mapped: 49040 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141692 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77280 kB' 'KernelStack: 6336 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.922 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.922 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 
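The per_node_1G_alloc case above finishes with node0 holding the expected 512 pages, and the log then moves into even_2G_alloc: NRHUGE=1024 together with HUGE_EVEN_ALLOC=yes, after which verify_nr_hugepages repeats the same meminfo walks (the AnonHugePages read in progress here feeds the anon=0 bookkeeping). A rough sketch of the arithmetic the even-allocation case is presumably checking, assuming the requested pages are split evenly across the online NUMA nodes (on this single-node VM, that would be all 1024 pages on node0); this is an illustration, not the script's actual code:

#!/usr/bin/env bash
# Illustrative check (assumed, not taken from setup/hugepages.sh): with an
# even allocation, each online node should report its share of nr_hugepages.
nr_hugepages=1024

nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( nr_hugepages / ${#nodes[@]} ))

for node_dir in "${nodes[@]}"; do
    node=${node_dir##*node}
    # Per-node meminfo lines read "Node <N> HugePages_Total:  <count>".
    total=$(awk -v n="$node" '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" {print $4}' \
            "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${per_node}"
done
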
00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- 
setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.923 20:02:29 -- setup/common.sh@33 -- # echo 0 00:10:47.923 20:02:29 -- setup/common.sh@33 -- # return 0 00:10:47.923 20:02:29 -- setup/hugepages.sh@97 -- # anon=0 00:10:47.923 20:02:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:47.923 20:02:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.923 20:02:29 -- setup/common.sh@18 -- # local node= 00:10:47.923 20:02:29 -- setup/common.sh@19 -- # local var val 00:10:47.923 20:02:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.923 20:02:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.923 20:02:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.923 20:02:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.923 20:02:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.923 20:02:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662500 kB' 'MemAvailable: 9466396 kB' 'Buffers: 2436 kB' 'Cached: 2016656 kB' 'SwapCached: 0 kB' 'Active: 850784 kB' 'Inactive: 1290784 kB' 'Active(anon): 132940 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124072 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141688 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6304 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- 
setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.923 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.923 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:29 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 
20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 
20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.924 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.924 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.924 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:47.924 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:47.924 20:02:30 -- setup/hugepages.sh@99 -- # surp=0 00:10:47.924 20:02:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:47.924 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:47.924 20:02:30 -- setup/common.sh@18 -- # local node= 00:10:47.924 20:02:30 -- setup/common.sh@19 -- # local var val 
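At this point the test has read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is starting the HugePages_Rsvd lookup. Each get_meminfo call in the trace follows the same shape: pick /proc/meminfo, or the per-node /sys/devices/system/node/node<N>/meminfo when a node argument is given (the node0 lookup appears further down), load it with mapfile, strip the "Node <N> " prefix, then scan field by field as above. A rough reconstruction with names taken from the trace but the exact plumbing assumed, not the real script:

```bash
#!/usr/bin/env bash
# Rough reconstruction of the traced get_meminfo flow (assumed plumbing).
shopt -s extglob                                  # needed for the 'Node +([0-9]) ' prefix strip
get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # With a node argument, read that node's meminfo instead (node 0 later in this log).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")              # per-node files prefix every line with 'Node N '
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue          # each skip is one 'continue' entry in the trace
        echo "$val"                               # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
        return 0
    done
}

get_meminfo HugePages_Rsvd      # 0 in this run
get_meminfo HugePages_Surp 0    # per-node surplus for node 0, also 0 here
```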
00:10:47.924 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.924 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.925 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.925 20:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.925 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.925 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662500 kB' 'MemAvailable: 9466396 kB' 'Buffers: 2436 kB' 'Cached: 2016656 kB' 'SwapCached: 0 kB' 'Active: 850876 kB' 'Inactive: 1290784 kB' 'Active(anon): 133032 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124176 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141688 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6320 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 
00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.925 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.925 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 
00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.926 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:47.926 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:47.926 20:02:30 -- setup/hugepages.sh@100 -- # resv=0 00:10:47.926 20:02:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:47.926 nr_hugepages=1024 00:10:47.926 20:02:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:47.926 resv_hugepages=0 00:10:47.926 surplus_hugepages=0 00:10:47.926 20:02:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:47.926 anon_hugepages=0 00:10:47.926 20:02:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:47.926 20:02:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:47.926 20:02:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:47.926 20:02:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:47.926 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:47.926 20:02:30 -- setup/common.sh@18 -- # local node= 00:10:47.926 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:47.926 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.926 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.926 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.926 20:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.926 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.926 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662500 kB' 'MemAvailable: 9466396 kB' 
'Buffers: 2436 kB' 'Cached: 2016656 kB' 'SwapCached: 0 kB' 'Active: 850788 kB' 'Inactive: 1290784 kB' 'Active(anon): 132944 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124072 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141688 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77276 kB' 'KernelStack: 6304 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.926 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.926 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 
00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 
20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.927 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.927 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.927 20:02:30 -- setup/common.sh@33 -- # echo 1024 00:10:47.927 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:47.927 20:02:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:47.927 20:02:30 -- setup/hugepages.sh@112 -- # get_nodes 00:10:47.927 20:02:30 -- setup/hugepages.sh@27 -- # local node 00:10:47.927 20:02:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:47.927 20:02:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:47.928 20:02:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:47.928 20:02:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:47.928 20:02:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:47.928 20:02:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:47.928 20:02:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:47.928 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.928 20:02:30 -- setup/common.sh@18 -- # local node=0 00:10:47.928 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:47.928 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.928 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.928 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:47.928 20:02:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:47.928 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.928 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7662500 kB' 'MemUsed: 4579476 kB' 'SwapCached: 0 kB' 'Active: 850788 kB' 'Inactive: 1290784 kB' 'Active(anon): 132944 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'FilePages: 2019092 kB' 'Mapped: 48912 kB' 'AnonPages: 124072 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 141688 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 
20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.928 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.928 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.929 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.929 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.929 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.929 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.929 20:02:30 -- setup/common.sh@32 -- # continue 00:10:47.929 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.929 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.929 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.929 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:47.929 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:47.929 20:02:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:47.929 20:02:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:47.929 20:02:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:47.929 20:02:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:47.929 20:02:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:47.929 node0=1024 expecting 1024 00:10:47.929 20:02:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:47.929 00:10:47.929 real 
0m0.707s 00:10:47.929 user 0m0.358s 00:10:47.929 sys 0m0.393s 00:10:47.929 20:02:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.929 20:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:47.929 ************************************ 00:10:47.929 END TEST even_2G_alloc 00:10:47.929 ************************************ 00:10:47.929 20:02:30 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:10:47.929 20:02:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.929 20:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.929 20:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:48.189 ************************************ 00:10:48.189 START TEST odd_alloc 00:10:48.189 ************************************ 00:10:48.189 20:02:30 -- common/autotest_common.sh@1111 -- # odd_alloc 00:10:48.189 20:02:30 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:10:48.189 20:02:30 -- setup/hugepages.sh@49 -- # local size=2098176 00:10:48.189 20:02:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:48.189 20:02:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:48.189 20:02:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:10:48.189 20:02:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:48.189 20:02:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:48.189 20:02:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:48.189 20:02:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:10:48.189 20:02:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:48.189 20:02:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:48.189 20:02:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:48.189 20:02:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:48.189 20:02:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:48.189 20:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:48.189 20:02:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:10:48.189 20:02:30 -- setup/hugepages.sh@83 -- # : 0 00:10:48.189 20:02:30 -- setup/hugepages.sh@84 -- # : 0 00:10:48.189 20:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:48.189 20:02:30 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:10:48.189 20:02:30 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:10:48.189 20:02:30 -- setup/hugepages.sh@160 -- # setup output 00:10:48.189 20:02:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:48.189 20:02:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:48.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.711 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:48.711 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:48.711 20:02:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:10:48.711 20:02:30 -- setup/hugepages.sh@89 -- # local node 00:10:48.711 20:02:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:48.711 20:02:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:48.711 20:02:30 -- setup/hugepages.sh@92 -- # local surp 00:10:48.711 20:02:30 -- setup/hugepages.sh@93 -- # local resv 00:10:48.711 20:02:30 -- setup/hugepages.sh@94 -- # local anon 00:10:48.711 20:02:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:48.711 20:02:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:48.711 20:02:30 -- setup/common.sh@17 -- # local get=AnonHugePages 
00:10:48.711 20:02:30 -- setup/common.sh@18 -- # local node= 00:10:48.711 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:48.711 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.711 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.711 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.711 20:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.711 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.711 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.711 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7674288 kB' 'MemAvailable: 9478192 kB' 'Buffers: 2436 kB' 'Cached: 2016664 kB' 'SwapCached: 0 kB' 'Active: 851056 kB' 'Inactive: 1290792 kB' 'Active(anon): 133212 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124704 kB' 'Mapped: 49036 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141856 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77444 kB' 'KernelStack: 6352 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:48.711 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.711 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.711 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.711 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.711 20:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.711 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.711 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 
20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.712 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.712 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:48.712 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:48.712 20:02:30 -- setup/hugepages.sh@97 -- # anon=0 00:10:48.712 20:02:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:48.712 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:48.712 20:02:30 -- setup/common.sh@18 -- # local node= 00:10:48.712 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:48.712 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.712 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.712 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.712 20:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.712 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.712 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.712 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.712 
20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7674288 kB' 'MemAvailable: 9478192 kB' 'Buffers: 2436 kB' 'Cached: 2016664 kB' 'SwapCached: 0 kB' 'Active: 850988 kB' 'Inactive: 1290792 kB' 'Active(anon): 133144 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124076 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141852 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77440 kB' 'KernelStack: 6336 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 
20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- 
setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.713 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.713 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.714 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:48.714 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:48.714 20:02:30 -- setup/hugepages.sh@99 -- # surp=0 00:10:48.714 20:02:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:48.714 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:48.714 20:02:30 -- setup/common.sh@18 -- # local node= 00:10:48.714 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:48.714 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.714 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.714 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.714 20:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.714 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.714 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7674288 kB' 'MemAvailable: 9478192 kB' 'Buffers: 2436 kB' 'Cached: 2016664 kB' 'SwapCached: 0 kB' 'Active: 850852 kB' 'Inactive: 1290792 kB' 'Active(anon): 133008 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124200 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141852 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77440 kB' 'KernelStack: 6320 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.714 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.714 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 
20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.715 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.715 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:48.715 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:48.715 20:02:30 -- setup/hugepages.sh@100 -- # resv=0 00:10:48.715 20:02:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:10:48.715 nr_hugepages=1025 00:10:48.715 resv_hugepages=0 00:10:48.715 20:02:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:48.715 surplus_hugepages=0 00:10:48.715 20:02:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:48.715 anon_hugepages=0 00:10:48.715 20:02:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:48.715 20:02:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:48.715 20:02:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:10:48.715 20:02:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:48.715 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:48.715 20:02:30 -- setup/common.sh@18 -- # local node= 00:10:48.715 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:48.715 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.715 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.715 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.715 20:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.715 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.715 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.715 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7674540 kB' 'MemAvailable: 9478444 kB' 'Buffers: 2436 kB' 'Cached: 2016664 kB' 'SwapCached: 0 kB' 'Active: 850640 kB' 'Inactive: 1290792 kB' 'Active(anon): 132796 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'AnonPages: 124008 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141852 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77440 kB' 'KernelStack: 6352 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 
00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 
00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.716 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.716 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 
-- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.717 20:02:30 -- setup/common.sh@33 -- # echo 1025 00:10:48.717 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:48.717 20:02:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:48.717 20:02:30 -- setup/hugepages.sh@112 -- # get_nodes 00:10:48.717 20:02:30 -- setup/hugepages.sh@27 -- # local node 00:10:48.717 20:02:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:48.717 20:02:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:10:48.717 20:02:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:48.717 20:02:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:48.717 20:02:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:48.717 20:02:30 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:10:48.717 20:02:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:48.717 20:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:48.717 20:02:30 -- setup/common.sh@18 -- # local node=0 00:10:48.717 20:02:30 -- setup/common.sh@19 -- # local var val 00:10:48.717 20:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.717 20:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.717 20:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:48.717 20:02:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:48.717 20:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.717 20:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7675060 kB' 'MemUsed: 4566916 kB' 'SwapCached: 0 kB' 'Active: 850784 kB' 'Inactive: 1290792 kB' 'Active(anon): 132940 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290792 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1352 kB' 'Writeback: 0 kB' 'FilePages: 2019100 kB' 'Mapped: 48920 kB' 'AnonPages: 123932 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 141848 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.717 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.717 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 
00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 
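The pass in progress here repeats the same lookup against /sys/devices/system/node/node0/meminfo to get the node-scoped HugePages_Surp. A rough sketch of that per-node variant, assuming the sysfs file carries the "Node 0 " prefix that the strip step above removes (get_node_meminfo_field is a hypothetical name):

    # Echo one field from a node's meminfo, dropping the "Node <n> " prefix
    # before matching the requested key.
    get_node_meminfo_field() {
        local node=$1 want=$2 line var val _
        while read -r line; do
            line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$want" ]] && { echo "$val"; return 0; }
        done </sys/devices/system/node/node"$node"/meminfo
        return 1
    }
    # e.g. get_node_meminfo_field 0 HugePages_Surp   # -> 0 on this run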
00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # continue 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.718 20:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.718 20:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.718 20:02:30 -- setup/common.sh@33 -- # echo 0 00:10:48.718 20:02:30 -- setup/common.sh@33 -- # return 0 00:10:48.718 20:02:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:48.718 20:02:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:48.718 20:02:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:48.718 20:02:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:48.718 20:02:30 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:10:48.718 node0=1025 expecting 1025 00:10:48.718 20:02:30 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:10:48.718 00:10:48.718 real 0m0.617s 00:10:48.718 user 0m0.252s 00:10:48.718 sys 0m0.399s 00:10:48.718 20:02:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:48.718 20:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:48.718 ************************************ 00:10:48.718 END TEST odd_alloc 00:10:48.718 ************************************ 00:10:48.718 20:02:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:10:48.718 20:02:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.718 20:02:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.718 20:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:48.978 ************************************ 00:10:48.978 START TEST custom_alloc 00:10:48.978 ************************************ 00:10:48.978 20:02:30 -- common/autotest_common.sh@1111 -- # custom_alloc 00:10:48.978 20:02:30 -- setup/hugepages.sh@167 -- # local IFS=, 00:10:48.978 20:02:30 -- setup/hugepages.sh@169 -- # local node 00:10:48.978 20:02:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:10:48.978 20:02:30 -- setup/hugepages.sh@170 -- # local nodes_hp 00:10:48.978 20:02:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:10:48.978 20:02:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:10:48.978 20:02:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:10:48.978 20:02:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 
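For the custom_alloc test starting here, the per-node target of 512 pages follows directly from the 1048576 kB request and the 2048 kB Hugepagesize reported in the meminfo dumps; a quick reproduction of that arithmetic in plain shell (not the test's own code):

    size_kb=1048576      # argument passed to get_test_nr_hugepages
    hugepage_kb=2048     # Hugepagesize from the meminfo output above
    echo $(( size_kb / hugepage_kb ))   # -> 512, the nr_hugepages used below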
00:10:48.978 20:02:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:48.978 20:02:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:48.978 20:02:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:48.978 20:02:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:48.978 20:02:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:48.978 20:02:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:48.978 20:02:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:48.978 20:02:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:48.978 20:02:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:48.978 20:02:30 -- setup/hugepages.sh@83 -- # : 0 00:10:48.978 20:02:30 -- setup/hugepages.sh@84 -- # : 0 00:10:48.978 20:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:10:48.978 20:02:30 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:10:48.978 20:02:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:10:48.978 20:02:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:10:48.978 20:02:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:48.978 20:02:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:48.978 20:02:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:48.978 20:02:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:48.978 20:02:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:48.978 20:02:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:48.978 20:02:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:10:48.978 20:02:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:48.978 20:02:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:10:48.978 20:02:30 -- setup/hugepages.sh@78 -- # return 0 00:10:48.978 20:02:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:10:48.978 20:02:30 -- setup/hugepages.sh@187 -- # setup output 00:10:48.978 20:02:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:48.978 20:02:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.238 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.238 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.238 20:02:31 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:10:49.238 20:02:31 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:10:49.238 20:02:31 -- setup/hugepages.sh@89 -- # local node 00:10:49.238 20:02:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:49.238 20:02:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:49.238 20:02:31 -- setup/hugepages.sh@92 -- # local surp 00:10:49.238 20:02:31 -- setup/hugepages.sh@93 -- # local resv 00:10:49.238 20:02:31 -- setup/hugepages.sh@94 -- # local anon 00:10:49.238 20:02:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:49.500 20:02:31 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:49.500 20:02:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:49.500 20:02:31 -- setup/common.sh@18 -- # local node= 00:10:49.500 20:02:31 -- setup/common.sh@19 -- # local var val 00:10:49.500 20:02:31 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.500 20:02:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.500 20:02:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.500 20:02:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.500 20:02:31 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.500 20:02:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.500 20:02:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8725568 kB' 'MemAvailable: 10529504 kB' 'Buffers: 2436 kB' 'Cached: 2016696 kB' 'SwapCached: 0 kB' 'Active: 850836 kB' 'Inactive: 1290824 kB' 'Active(anon): 132992 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'AnonPages: 124108 kB' 'Mapped: 49048 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141820 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77408 kB' 'KernelStack: 6352 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:49.500 20:02:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.500 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.500 20:02:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.500 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.500 20:02:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.500 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.500 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 
-- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.501 20:02:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.501 20:02:31 -- setup/common.sh@33 -- # echo 0 00:10:49.501 20:02:31 -- setup/common.sh@33 -- # return 0 00:10:49.501 20:02:31 -- setup/hugepages.sh@97 -- # anon=0 00:10:49.501 20:02:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:49.501 20:02:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:49.501 20:02:31 -- setup/common.sh@18 -- # local node= 00:10:49.501 20:02:31 -- setup/common.sh@19 -- # local var val 00:10:49.501 20:02:31 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.501 20:02:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.501 20:02:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.501 20:02:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.501 20:02:31 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.501 20:02:31 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.501 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8725820 kB' 'MemAvailable: 10529756 kB' 'Buffers: 2436 kB' 'Cached: 2016696 kB' 'SwapCached: 0 kB' 'Active: 850564 kB' 'Inactive: 1290824 kB' 'Active(anon): 132720 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1472 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141816 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77404 kB' 'KernelStack: 6304 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 
00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.502 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.502 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.503 20:02:31 -- setup/common.sh@33 -- # echo 0 00:10:49.503 20:02:31 -- setup/common.sh@33 -- # return 0 00:10:49.503 20:02:31 -- setup/hugepages.sh@99 -- # surp=0 00:10:49.503 20:02:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:49.503 20:02:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:49.503 20:02:31 -- setup/common.sh@18 -- # local node= 00:10:49.503 20:02:31 -- setup/common.sh@19 -- # local var val 00:10:49.503 20:02:31 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.503 20:02:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.503 20:02:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.503 20:02:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.503 20:02:31 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.503 20:02:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8731476 kB' 'MemAvailable: 10535412 kB' 'Buffers: 2436 kB' 'Cached: 2016696 kB' 'SwapCached: 0 kB' 'Active: 850688 kB' 'Inactive: 1290824 kB' 'Active(anon): 132844 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1468 kB' 'Writeback: 0 kB' 'AnonPages: 124212 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141816 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77404 kB' 'KernelStack: 6304 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 
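The xtrace above is setup/common.sh's get_meminfo walking the captured /proc/meminfo snapshot key by key until it reaches the one it was asked for (HugePages_Surp, then HugePages_Rsvd, in the calls traced here) and echoing its value. The condensed bash sketch below illustrates that pattern; it is reconstructed from what the trace shows, not the verbatim SPDK helper, and the final "echo 0" fallback for a missing key is an assumption.

# Condensed illustration of the get_meminfo pattern traced above -- not the
# verbatim SPDK helper.  Usage: get_meminfo <key> [node], e.g.
#   get_meminfo HugePages_Surp      # -> 0 in this run
#   get_meminfo HugePages_Total 0   # read node0's meminfo instead
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # A node id switches the source to the node-local file, whose lines
    # carry a "Node <id> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")         # drop the per-node prefix, if any

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"                     # e.g. "0" for HugePages_Surp above
        return 0
    done
    echo 0                                   # key not present (assumption)
}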
00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.503 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.503 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- 
setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 
20:02:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 
20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.504 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.504 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.504 20:02:31 -- setup/common.sh@33 -- # echo 0 00:10:49.504 20:02:31 -- setup/common.sh@33 -- # return 0 00:10:49.504 20:02:31 -- setup/hugepages.sh@100 -- # resv=0 00:10:49.504 nr_hugepages=512 00:10:49.504 20:02:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:10:49.504 resv_hugepages=0 00:10:49.504 20:02:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:49.504 surplus_hugepages=0 00:10:49.504 20:02:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:49.504 anon_hugepages=0 00:10:49.504 20:02:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:49.504 20:02:31 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:49.504 20:02:31 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:10:49.505 20:02:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:49.505 20:02:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:49.505 20:02:31 -- setup/common.sh@18 -- # local node= 00:10:49.505 20:02:31 -- setup/common.sh@19 -- # local var val 00:10:49.505 20:02:31 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.505 20:02:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.505 20:02:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.505 20:02:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.505 20:02:31 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.505 20:02:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.505 20:02:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8731476 kB' 'MemAvailable: 10535412 kB' 'Buffers: 2436 kB' 'Cached: 2016696 kB' 'SwapCached: 0 kB' 'Active: 850852 kB' 'Inactive: 1290824 kB' 'Active(anon): 133008 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1468 kB' 'Writeback: 0 kB' 'AnonPages: 124112 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 64412 kB' 'Slab: 141816 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77404 kB' 'KernelStack: 6288 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 366644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- 
setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 
00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.505 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.505 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- 
# [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.506 20:02:31 -- setup/common.sh@33 -- # echo 512 00:10:49.506 20:02:31 -- setup/common.sh@33 -- # return 0 00:10:49.506 20:02:31 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:49.506 20:02:31 -- setup/hugepages.sh@112 -- # get_nodes 00:10:49.506 20:02:31 -- setup/hugepages.sh@27 -- # local node 00:10:49.506 20:02:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:49.506 20:02:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:49.506 20:02:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:49.506 20:02:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 
)) 00:10:49.506 20:02:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:49.506 20:02:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:49.506 20:02:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:49.506 20:02:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:49.506 20:02:31 -- setup/common.sh@18 -- # local node=0 00:10:49.506 20:02:31 -- setup/common.sh@19 -- # local var val 00:10:49.506 20:02:31 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.506 20:02:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.506 20:02:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:49.506 20:02:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:49.506 20:02:31 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.506 20:02:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8731476 kB' 'MemUsed: 3510500 kB' 'SwapCached: 0 kB' 'Active: 850912 kB' 'Inactive: 1290824 kB' 'Active(anon): 133068 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1468 kB' 'Writeback: 0 kB' 'FilePages: 2019132 kB' 'Mapped: 49004 kB' 'AnonPages: 124176 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64412 kB' 'Slab: 141816 kB' 'SReclaimable: 64412 kB' 'SUnreclaim: 77404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.506 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.506 20:02:31 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.506 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # 
continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # continue 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.507 20:02:31 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.507 20:02:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.507 20:02:31 -- setup/common.sh@33 -- # echo 0 00:10:49.507 20:02:31 -- setup/common.sh@33 -- # return 0 00:10:49.507 20:02:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:49.507 20:02:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:49.507 20:02:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:49.507 20:02:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:49.507 node0=512 expecting 512 00:10:49.507 20:02:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:49.507 20:02:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:49.507 00:10:49.507 real 0m0.644s 00:10:49.507 user 0m0.309s 00:10:49.507 sys 0m0.377s 00:10:49.507 20:02:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.507 20:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:49.507 ************************************ 00:10:49.507 END TEST custom_alloc 00:10:49.507 ************************************ 00:10:49.507 20:02:31 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:10:49.507 20:02:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:49.507 20:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.507 20:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:49.767 ************************************ 00:10:49.767 START TEST no_shrink_alloc 00:10:49.767 ************************************ 00:10:49.767 20:02:31 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:10:49.767 20:02:31 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:10:49.767 20:02:31 -- setup/hugepages.sh@49 -- # local size=2097152 00:10:49.767 20:02:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:49.767 20:02:31 -- setup/hugepages.sh@51 -- # shift 00:10:49.767 20:02:31 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:49.767 20:02:31 -- setup/hugepages.sh@52 -- # local node_ids 00:10:49.767 20:02:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:49.767 20:02:31 -- 
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:49.767 20:02:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:49.767 20:02:31 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:49.767 20:02:31 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:49.767 20:02:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:49.767 20:02:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:49.767 20:02:31 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:49.767 20:02:31 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:49.767 20:02:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:49.767 20:02:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:49.767 20:02:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:49.767 20:02:31 -- setup/hugepages.sh@73 -- # return 0 00:10:49.767 20:02:31 -- setup/hugepages.sh@198 -- # setup output 00:10:49.767 20:02:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:49.767 20:02:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.026 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.026 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.286 20:02:32 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:10:50.286 20:02:32 -- setup/hugepages.sh@89 -- # local node 00:10:50.286 20:02:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:50.286 20:02:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:50.286 20:02:32 -- setup/hugepages.sh@92 -- # local surp 00:10:50.286 20:02:32 -- setup/hugepages.sh@93 -- # local resv 00:10:50.286 20:02:32 -- setup/hugepages.sh@94 -- # local anon 00:10:50.286 20:02:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:50.286 20:02:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:50.286 20:02:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:50.286 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.286 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.286 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.286 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.286 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.286 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.286 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.286 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7684120 kB' 'MemAvailable: 9488052 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 846656 kB' 'Inactive: 1290828 kB' 'Active(anon): 128812 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1608 kB' 'Writeback: 0 kB' 'AnonPages: 120176 kB' 'Mapped: 48636 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141664 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77264 kB' 'KernelStack: 6216 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 
'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.286 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.286 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
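Pulling the pieces of this verify_nr_hugepages trace together: the script counts AnonHugePages only when transparent hugepages are not pinned to [never] (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above), then reads HugePages_Surp, HugePages_Rsvd and HugePages_Total through get_meminfo and checks the pool against the count the test configured. A rough sketch of that bookkeeping follows; it reuses the get_meminfo sketch given earlier, the sysfs path for the THP setting is the standard location rather than something shown in this log, and the function and variable names are illustrative, not the script's own.

# Rough sketch of the bookkeeping verify_nr_hugepages performs in this trace;
# verify_hugepage_pool and `expected` are illustrative names.
verify_hugepage_pool() {
    local expected=$1        # 512 for custom_alloc above, 1024 for this test
    local thp anon surp resv total

    # Standard sysfs knob; the trace only shows its value ("always [madvise] never").
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    # Count anonymous huge pages only when THP is not forced to [never].
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)
    fi
    anon=${anon:-0}

    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)

    echo "nr_hugepages=$expected"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool is healthy when the kernel reports exactly what was requested
    # plus surplus and reserved pages -- the (( 512 == nr_hugepages + surp + resv ))
    # check visible earlier in this log.
    (( total == expected + surp + resv ))
}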
00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.287 20:02:32 -- setup/common.sh@33 -- # echo 0 00:10:50.287 20:02:32 -- setup/common.sh@33 -- # return 0 00:10:50.287 20:02:32 -- setup/hugepages.sh@97 -- # anon=0 00:10:50.287 20:02:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:50.287 20:02:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:50.287 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.287 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.287 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.287 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.287 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.287 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.287 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.287 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.287 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7684376 kB' 'MemAvailable: 9488308 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 846008 kB' 'Inactive: 1290828 kB' 'Active(anon): 128164 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1608 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141656 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77256 kB' 'KernelStack: 6212 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
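The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one 'Key: value' pair at a time: each line is read into var/val, skipped with continue unless var matches the requested field (AnonHugePages a moment ago, HugePages_Surp next), and the matching value is echoed before returning. A minimal sketch of that loop, reconstructed from the common.sh@17-33 references in the trace; the real script may read the file slightly differently:

shopt -s extglob                         # needed for the +([0-9]) pattern below

get_meminfo() {                          # e.g. get_meminfo HugePages_Surp [node]
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    local -a mem
    # per-node lookups read that node's own meminfo instead of the global file
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # per-node lines carry a "Node N" prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                      # value in kB, or a bare page count
        return 0
    done
    return 1
}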
00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 
00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 
00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.287 20:02:32 -- setup/common.sh@33 -- # echo 0 00:10:50.287 20:02:32 -- setup/common.sh@33 -- # return 0 00:10:50.287 20:02:32 -- setup/hugepages.sh@99 -- # surp=0 00:10:50.287 20:02:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:50.287 20:02:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:50.287 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.287 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.287 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.287 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.287 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.287 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.287 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.287 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7684376 kB' 'MemAvailable: 9488308 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 846116 kB' 'Inactive: 1290828 kB' 'Active(anon): 128272 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1608 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141656 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77256 kB' 'KernelStack: 6196 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54840 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.287 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.287 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # 
continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 
20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.288 20:02:32 -- setup/common.sh@33 -- # echo 0 00:10:50.288 20:02:32 -- setup/common.sh@33 -- # return 0 00:10:50.288 20:02:32 -- setup/hugepages.sh@100 -- # resv=0 00:10:50.288 nr_hugepages=1024 00:10:50.288 20:02:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:50.288 resv_hugepages=0 00:10:50.288 20:02:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:50.288 surplus_hugepages=0 00:10:50.288 20:02:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:50.288 anon_hugepages=0 00:10:50.288 20:02:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:50.288 20:02:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:50.288 20:02:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:50.288 20:02:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:50.288 20:02:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:50.288 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.288 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.288 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.288 
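At setup/hugepages.sh@97-@110 above, the three values just collected (anon=0, surp=0, resv=0) feed the accounting check: the 1024 pages the test configured must equal the kernel's nr_hugepages plus surplus plus reserved pages, and HugePages_Total from /proc/meminfo must agree. A sketch of that arithmetic only, using this run's numbers; 'requested' is a name introduced here for illustration, and the sketch reuses the get_meminfo function shown earlier:

# values traced in this run
anon=0              # AnonHugePages (kB); no transparent hugepages in play
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
nr_hugepages=1024   # echoed as nr_hugepages=1024 in the trace
requested=1024      # hypothetical name for the count the test asked for

(( requested == nr_hugepages + surp + resv ))                      || exit 1
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1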
20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.288 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.288 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.288 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.288 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7684376 kB' 'MemAvailable: 9488308 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 846012 kB' 'Inactive: 1290828 kB' 'Active(anon): 128168 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1608 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 48448 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141656 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77256 kB' 'KernelStack: 6212 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54840 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 
20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.288 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.288 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 
20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.289 20:02:32 -- setup/common.sh@33 -- # echo 1024 00:10:50.289 20:02:32 -- setup/common.sh@33 -- # return 0 00:10:50.289 20:02:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:50.289 20:02:32 -- setup/hugepages.sh@112 -- # get_nodes 00:10:50.289 20:02:32 -- setup/hugepages.sh@27 -- # local node 00:10:50.289 20:02:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:50.289 20:02:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:50.289 20:02:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:50.289 20:02:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:50.289 20:02:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:50.289 20:02:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:50.289 20:02:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:50.289 20:02:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:50.289 20:02:32 -- setup/common.sh@18 -- # local node=0 00:10:50.289 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.289 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.289 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.289 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:50.289 20:02:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:50.289 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.289 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7684376 kB' 'MemUsed: 4557600 kB' 'SwapCached: 0 kB' 'Active: 845960 kB' 'Inactive: 1290828 kB' 'Active(anon): 128116 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1608 kB' 'Writeback: 0 kB' 'FilePages: 2019136 kB' 'Mapped: 48448 kB' 'AnonPages: 119468 kB' 'Shmem: 10464 kB' 'KernelStack: 6180 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64400 kB' 'Slab: 141656 kB' 'SReclaimable: 64400 
kB' 'SUnreclaim: 77256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 
20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.289 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.289 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.289 20:02:32 -- setup/common.sh@33 -- # echo 0 00:10:50.289 20:02:32 -- setup/common.sh@33 -- # return 0 
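The xtrace above is setup/common.sh's meminfo helper scanning every key in /proc/meminfo until it reaches the one requested (HugePages_Surp in this pass, which reports 0). A minimal bash sketch of the loop being traced, reconstructed from the traced commands; the name get_meminfo and the file paths appear in the trace, while the argument handling, per-node branch, and not-found return value are simplified assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # Print the value of one /proc/meminfo (or per-NUMA-node meminfo) field.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local mem var val _

        # Per-node lookups read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix on per-node files

        # Scan line by line: "HugePages_Surp:   0" -> var=HugePages_Surp, val=0
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo HugePages_Surp   -> 0 on this runner
    #      get_meminfo HugePages_Total  -> 1024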
00:10:50.289 20:02:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:50.289 20:02:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:50.289 20:02:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:50.289 20:02:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:50.289 20:02:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:50.289 node0=1024 expecting 1024 00:10:50.289 20:02:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:50.289 20:02:32 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:10:50.289 20:02:32 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:10:50.289 20:02:32 -- setup/hugepages.sh@202 -- # setup output 00:10:50.289 20:02:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:50.289 20:02:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.862 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.862 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.862 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:10:50.862 20:02:32 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:10:50.862 20:02:32 -- setup/hugepages.sh@89 -- # local node 00:10:50.862 20:02:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:50.862 20:02:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:50.862 20:02:32 -- setup/hugepages.sh@92 -- # local surp 00:10:50.862 20:02:32 -- setup/hugepages.sh@93 -- # local resv 00:10:50.862 20:02:32 -- setup/hugepages.sh@94 -- # local anon 00:10:50.862 20:02:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:50.862 20:02:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:50.862 20:02:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:50.862 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.862 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.862 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.862 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.862 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.862 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.862 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.862 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7685260 kB' 'MemAvailable: 9489192 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 846336 kB' 'Inactive: 1290828 kB' 'Active(anon): 128492 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 48368 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141620 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6212 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 
'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.862 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.862 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 
20:02:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 
00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.863 20:02:32 -- setup/common.sh@33 -- # echo 0 00:10:50.863 20:02:32 -- setup/common.sh@33 -- # return 0 00:10:50.863 20:02:32 -- setup/hugepages.sh@97 -- # anon=0 00:10:50.863 20:02:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:50.863 20:02:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:50.863 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.863 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.863 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.863 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.863 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.863 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.863 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.863 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.863 20:02:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7685008 kB' 'MemAvailable: 9488940 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 845836 kB' 'Inactive: 1290828 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 48224 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141620 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6192 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 
00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.863 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.863 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 
-- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 
00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # continue 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.864 20:02:32 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.864 20:02:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.864 20:02:32 -- setup/common.sh@33 -- # echo 0 00:10:50.864 20:02:32 -- setup/common.sh@33 -- # return 0 00:10:50.864 20:02:32 -- setup/hugepages.sh@99 -- # surp=0 00:10:50.864 20:02:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:50.864 20:02:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:50.864 20:02:32 -- setup/common.sh@18 -- # local node= 00:10:50.864 20:02:32 -- setup/common.sh@19 -- # local var val 00:10:50.864 20:02:32 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.864 20:02:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.865 20:02:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.865 20:02:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.865 20:02:32 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.865 20:02:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7685008 kB' 'MemAvailable: 9488940 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 846016 kB' 'Inactive: 1290828 kB' 'Active(anon): 128172 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 48224 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141620 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6176 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 
00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.865 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.865 20:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.866 20:02:33 -- setup/common.sh@33 -- # echo 0 00:10:50.866 20:02:33 -- setup/common.sh@33 -- # return 0 00:10:50.866 20:02:33 -- setup/hugepages.sh@100 -- # resv=0 00:10:50.866 20:02:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:50.866 nr_hugepages=1024 00:10:50.866 resv_hugepages=0 00:10:50.866 20:02:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:50.866 surplus_hugepages=0 00:10:50.866 20:02:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:50.866 anon_hugepages=0 00:10:50.866 20:02:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:50.866 20:02:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:50.866 20:02:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:50.866 20:02:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:50.866 20:02:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:50.866 20:02:33 -- setup/common.sh@18 -- # local node= 00:10:50.866 20:02:33 -- setup/common.sh@19 -- # local var val 00:10:50.866 20:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.866 20:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.866 20:02:33 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.866 20:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.866 20:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.866 20:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.866 20:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7685008 kB' 'MemAvailable: 9488940 kB' 'Buffers: 2436 kB' 'Cached: 2016700 kB' 'SwapCached: 0 kB' 'Active: 845836 kB' 'Inactive: 1290828 kB' 'Active(anon): 127992 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 119384 kB' 'Mapped: 48224 kB' 'Shmem: 10464 kB' 'KReclaimable: 64400 kB' 'Slab: 141620 kB' 'SReclaimable: 64400 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6192 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # 
continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.866 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.866 20:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.867 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.867 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.867 20:02:33 -- setup/common.sh@33 -- # echo 1024 00:10:50.868 20:02:33 -- setup/common.sh@33 -- # return 0 00:10:50.868 20:02:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:50.868 20:02:33 -- setup/hugepages.sh@112 -- # get_nodes 00:10:50.868 20:02:33 -- setup/hugepages.sh@27 -- # local node 00:10:50.868 20:02:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:50.868 20:02:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:50.868 20:02:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:50.868 20:02:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:50.868 20:02:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:50.868 20:02:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:50.868 20:02:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:50.868 20:02:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:50.868 20:02:33 -- setup/common.sh@18 -- # local node=0 00:10:50.868 20:02:33 -- setup/common.sh@19 -- # local var val 00:10:50.868 20:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.868 20:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.868 20:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:50.868 20:02:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:50.868 20:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.868 20:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7685008 kB' 'MemUsed: 4556968 kB' 'SwapCached: 0 kB' 'Active: 845988 kB' 'Inactive: 1290828 kB' 'Active(anon): 128144 kB' 'Inactive(anon): 0 kB' 'Active(file): 717844 kB' 'Inactive(file): 1290828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2019136 kB' 'Mapped: 48224 kB' 'AnonPages: 119268 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64400 kB' 'Slab: 141620 kB' 'SReclaimable: 64400 
kB' 'SUnreclaim: 77220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # 
continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.868 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.868 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # continue 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.869 20:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.869 20:02:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.869 20:02:33 -- setup/common.sh@33 -- # echo 0 00:10:50.869 20:02:33 -- setup/common.sh@33 -- # return 0 00:10:50.869 20:02:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
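The long runs of "continue" above are setup/common.sh's get_meminfo loop skipping every meminfo field until it reaches the one it was asked for (HugePages_Total from /proc/meminfo, then HugePages_Surp from node 0). A minimal standalone sketch of that lookup, assuming only bash and the stock /proc and sysfs meminfo files -- this is an illustration of the idea, not the SPDK helper itself:

    get_meminfo_sketch() {
        # usage: get_meminfo_sketch <field> [node]; e.g. get_meminfo_sketch HugePages_Surp 0
        local want=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#Node [0-9]* }              # per-node files prefix every row with "Node N"
            IFS=': ' read -r var val _ <<< "$line" # split "Field: value kB" into name and value
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

With this sketch, hp_total=$(get_meminfo_sketch HugePages_Total) and hp_surp=$(get_meminfo_sketch HugePages_Surp 0) reproduce the two lookups traced here, which return 1024 and 0 respectively.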
00:10:50.869 20:02:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:50.869 20:02:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:50.869 20:02:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:50.869 node0=1024 expecting 1024 00:10:50.869 20:02:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:50.869 20:02:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:50.869 00:10:50.869 real 0m1.323s 00:10:50.869 user 0m0.613s 00:10:50.869 sys 0m0.790s 00:10:50.869 20:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:50.869 20:02:33 -- common/autotest_common.sh@10 -- # set +x 00:10:50.869 ************************************ 00:10:50.869 END TEST no_shrink_alloc 00:10:50.869 ************************************ 00:10:51.129 20:02:33 -- setup/hugepages.sh@217 -- # clear_hp 00:10:51.129 20:02:33 -- setup/hugepages.sh@37 -- # local node hp 00:10:51.129 20:02:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:51.129 20:02:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:51.129 20:02:33 -- setup/hugepages.sh@41 -- # echo 0 00:10:51.129 20:02:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:51.129 20:02:33 -- setup/hugepages.sh@41 -- # echo 0 00:10:51.129 20:02:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:51.129 20:02:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:51.129 00:10:51.129 real 0m5.882s 00:10:51.129 user 0m2.630s 00:10:51.129 sys 0m3.401s 00:10:51.129 20:02:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.129 20:02:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.129 ************************************ 00:10:51.129 END TEST hugepages 00:10:51.129 ************************************ 00:10:51.129 20:02:33 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:51.129 20:02:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:51.129 20:02:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.129 20:02:33 -- common/autotest_common.sh@10 -- # set +x 00:10:51.129 ************************************ 00:10:51.129 START TEST driver 00:10:51.129 ************************************ 00:10:51.129 20:02:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:51.389 * Looking for test storage... 
00:10:51.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:51.389 20:02:33 -- setup/driver.sh@68 -- # setup reset 00:10:51.389 20:02:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:51.389 20:02:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:51.957 20:02:34 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:10:51.957 20:02:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:51.957 20:02:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.957 20:02:34 -- common/autotest_common.sh@10 -- # set +x 00:10:52.216 ************************************ 00:10:52.216 START TEST guess_driver 00:10:52.216 ************************************ 00:10:52.216 20:02:34 -- common/autotest_common.sh@1111 -- # guess_driver 00:10:52.216 20:02:34 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:10:52.216 20:02:34 -- setup/driver.sh@47 -- # local fail=0 00:10:52.216 20:02:34 -- setup/driver.sh@49 -- # pick_driver 00:10:52.216 20:02:34 -- setup/driver.sh@36 -- # vfio 00:10:52.216 20:02:34 -- setup/driver.sh@21 -- # local iommu_grups 00:10:52.216 20:02:34 -- setup/driver.sh@22 -- # local unsafe_vfio 00:10:52.216 20:02:34 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:10:52.216 20:02:34 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:10:52.216 20:02:34 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:10:52.216 20:02:34 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:10:52.216 20:02:34 -- setup/driver.sh@32 -- # return 1 00:10:52.216 20:02:34 -- setup/driver.sh@38 -- # uio 00:10:52.216 20:02:34 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:10:52.216 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:10:52.216 20:02:34 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:10:52.216 Looking for driver=uio_pci_generic 00:10:52.216 20:02:34 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:10:52.216 20:02:34 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.216 20:02:34 -- setup/driver.sh@45 -- # setup output config 00:10:52.216 20:02:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:52.216 20:02:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:52.784 20:02:35 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:10:52.784 20:02:35 -- setup/driver.sh@58 -- # continue 00:10:52.784 20:02:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.043 20:02:35 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:53.043 20:02:35 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:53.043 20:02:35 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.043 20:02:35 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:53.043 20:02:35 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:53.043 20:02:35 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.043 20:02:35 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:10:53.043 20:02:35 -- setup/driver.sh@65 -- # setup reset 00:10:53.043 20:02:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:53.043 20:02:35 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:54.067 00:10:54.067 real 0m1.721s 00:10:54.067 user 0m0.603s 00:10:54.067 sys 0m1.149s 00:10:54.067 20:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.067 20:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:54.067 ************************************ 00:10:54.067 END TEST guess_driver 00:10:54.067 ************************************ 00:10:54.067 ************************************ 00:10:54.067 END TEST driver 00:10:54.067 ************************************ 00:10:54.067 00:10:54.067 real 0m2.710s 00:10:54.067 user 0m0.948s 00:10:54.067 sys 0m1.873s 00:10:54.067 20:02:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.067 20:02:36 -- common/autotest_common.sh@10 -- # set +x 00:10:54.067 20:02:36 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:54.067 20:02:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:54.067 20:02:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.067 20:02:36 -- common/autotest_common.sh@10 -- # set +x 00:10:54.067 ************************************ 00:10:54.067 START TEST devices 00:10:54.067 ************************************ 00:10:54.067 20:02:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:54.067 * Looking for test storage... 00:10:54.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:54.067 20:02:36 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:10:54.067 20:02:36 -- setup/devices.sh@192 -- # setup reset 00:10:54.067 20:02:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:54.067 20:02:36 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:55.005 20:02:37 -- setup/devices.sh@194 -- # get_zoned_devs 00:10:55.005 20:02:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:55.005 20:02:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:55.005 20:02:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:55.005 20:02:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.005 20:02:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:55.005 20:02:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:55.005 20:02:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.005 20:02:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:10:55.005 20:02:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:10:55.005 20:02:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.005 20:02:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:10:55.005 20:02:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:10:55.005 20:02:37 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.005 20:02:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:55.005 20:02:37 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:55.005 20:02:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:55.005 20:02:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.005 20:02:37 -- setup/devices.sh@196 -- # blocks=() 00:10:55.005 20:02:37 -- setup/devices.sh@196 -- # declare -a blocks 00:10:55.005 20:02:37 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:10:55.005 20:02:37 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:10:55.005 20:02:37 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:10:55.005 20:02:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.005 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:10:55.005 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:55.005 20:02:37 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:55.005 20:02:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:55.006 20:02:37 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:10:55.006 20:02:37 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:10:55.006 20:02:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:10:55.006 No valid GPT data, bailing 00:10:55.006 20:02:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:55.006 20:02:37 -- scripts/common.sh@391 -- # pt= 00:10:55.006 20:02:37 -- scripts/common.sh@392 -- # return 1 00:10:55.006 20:02:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:10:55.006 20:02:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.006 20:02:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.006 20:02:37 -- setup/common.sh@80 -- # echo 4294967296 00:10:55.006 20:02:37 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:55.006 20:02:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.006 20:02:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:55.006 20:02:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.006 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:10:55.006 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:55.006 20:02:37 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:55.006 20:02:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:55.006 20:02:37 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:10:55.006 20:02:37 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:10:55.006 20:02:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:10:55.278 No valid GPT data, bailing 00:10:55.278 20:02:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:10:55.278 20:02:37 -- scripts/common.sh@391 -- # pt= 00:10:55.278 20:02:37 -- scripts/common.sh@392 -- # return 1 00:10:55.278 20:02:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:10:55.278 20:02:37 -- setup/common.sh@76 -- # local dev=nvme0n2 00:10:55.278 20:02:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:10:55.278 20:02:37 -- setup/common.sh@80 -- # echo 4294967296 00:10:55.278 20:02:37 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:55.278 20:02:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.278 20:02:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:55.278 20:02:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.278 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:10:55.278 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:55.278 20:02:37 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:55.278 20:02:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:55.278 20:02:37 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:10:55.278 20:02:37 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:10:55.278 20:02:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:10:55.278 No valid GPT data, bailing 00:10:55.278 20:02:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:10:55.278 20:02:37 -- scripts/common.sh@391 -- # pt= 00:10:55.278 20:02:37 -- scripts/common.sh@392 -- # return 1 00:10:55.278 20:02:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:10:55.278 20:02:37 -- setup/common.sh@76 -- # local dev=nvme0n3 00:10:55.278 20:02:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:10:55.278 20:02:37 -- setup/common.sh@80 -- # echo 4294967296 00:10:55.278 20:02:37 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:55.278 20:02:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.278 20:02:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:55.279 20:02:37 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.279 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:10:55.279 20:02:37 -- setup/devices.sh@201 -- # ctrl=nvme1 00:10:55.279 20:02:37 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:10:55.279 20:02:37 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:55.279 20:02:37 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:10:55.279 20:02:37 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:10:55.279 20:02:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:10:55.279 No valid GPT data, bailing 00:10:55.279 20:02:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:55.279 20:02:37 -- scripts/common.sh@391 -- # pt= 00:10:55.279 20:02:37 -- scripts/common.sh@392 -- # return 1 00:10:55.279 20:02:37 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:10:55.279 20:02:37 -- setup/common.sh@76 -- # local dev=nvme1n1 00:10:55.279 20:02:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:10:55.279 20:02:37 -- setup/common.sh@80 -- # echo 5368709120 00:10:55.279 20:02:37 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:10:55.279 20:02:37 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.279 20:02:37 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:10:55.279 20:02:37 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:10:55.279 20:02:37 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:10:55.279 20:02:37 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:10:55.279 20:02:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:55.279 20:02:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.279 20:02:37 -- common/autotest_common.sh@10 -- # set +x 00:10:55.538 
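Everything from get_zoned_devs down to the four sec_size_to_bytes checks is devices.sh deciding which namespaces are safe to test on: not zoned, not already carrying a partition table, and at least min_disk_size (3 GiB) large. A rough standalone version of that scan, assuming blkid is available and reading the 512-byte-sector size file under /sys/block; the real script additionally consults spdk-gpt.py, which is skipped here:

    min_disk_size=$((3 * 1024 * 1024 * 1024))            # 3221225472, same threshold as devices.sh@198
    candidates=()
    for sys in /sys/block/nvme*; do
        [[ -e $sys ]] || continue
        dev=${sys##*/}
        [[ $dev == *c* ]] && continue                    # skip nvmeXcYnZ controller nodes, like the !(*c*) glob above
        [[ $(<"$sys/queue/zoned") == none ]] || continue # same test as is_block_zoned
        size=$(( $(<"$sys/size") * 512 ))                # the sysfs size file counts 512-byte sectors
        (( size >= min_disk_size )) || continue
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -z $pt ]] && candidates+=("$dev")             # no partition table -> treated as free, as in block_in_use
    done
    printf 'usable test disk: %s\n' "${candidates[@]}"

On the VM traced above, a scan like this keeps nvme0n1, nvme0n2, nvme0n3 (4294967296 bytes each) and nvme1n1 (5368709120 bytes), which is the four-disk count behind the (( 4 > 0 )) check.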
************************************ 00:10:55.538 START TEST nvme_mount 00:10:55.538 ************************************ 00:10:55.538 20:02:37 -- common/autotest_common.sh@1111 -- # nvme_mount 00:10:55.538 20:02:37 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:10:55.538 20:02:37 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:10:55.538 20:02:37 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:55.538 20:02:37 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:55.538 20:02:37 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:10:55.538 20:02:37 -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:55.538 20:02:37 -- setup/common.sh@40 -- # local part_no=1 00:10:55.538 20:02:37 -- setup/common.sh@41 -- # local size=1073741824 00:10:55.538 20:02:37 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:55.538 20:02:37 -- setup/common.sh@44 -- # parts=() 00:10:55.538 20:02:37 -- setup/common.sh@44 -- # local parts 00:10:55.538 20:02:37 -- setup/common.sh@46 -- # (( part = 1 )) 00:10:55.538 20:02:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:55.538 20:02:37 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:55.538 20:02:37 -- setup/common.sh@46 -- # (( part++ )) 00:10:55.538 20:02:37 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:55.538 20:02:37 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:55.538 20:02:37 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:55.538 20:02:37 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:10:56.475 Creating new GPT entries in memory. 00:10:56.475 GPT data structures destroyed! You may now partition the disk using fdisk or 00:10:56.475 other utilities. 00:10:56.475 20:02:38 -- setup/common.sh@57 -- # (( part = 1 )) 00:10:56.475 20:02:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:56.475 20:02:38 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:10:56.475 20:02:38 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:10:56.475 20:02:38 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:10:57.855 Creating new GPT entries in memory. 00:10:57.855 The operation has completed successfully. 
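The block ending in "The operation has completed successfully." is partition_drive at work: wipe the label with sgdisk --zap-all, then create partition 1 over sectors 2048-264191 (262144 sectors, 128 MiB) while holding a flock on the whole disk, with sync_dev_uevents.sh parked in the background waiting for the nvme0n1p1 add event. A condensed sketch of that sequence plus the mkfs/mount that follows it; the mount point and the use of udevadm settle in place of the SPDK uevent helper are assumptions for illustration only:

    disk=/dev/nvme0n1
    mnt=/mnt/nvme_test                                 # stand-in for .../test/setup/nvme_mount
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:2048:264191   # sectors 2048..264191 = 262144 sectors = 128 MiB
    udevadm settle                                     # wait for /dev/nvme0n1p1 to appear, instead of sync_dev_uevents.sh
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"

Taking the lock on the whole disk before the second sgdisk call, as common.sh@60 does in the trace, is presumably there to keep the repartition from racing udev's own re-reads of the partition table; the sketch keeps that detail as-is.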
00:10:57.855 20:02:39 -- setup/common.sh@57 -- # (( part++ )) 00:10:57.855 20:02:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:57.855 20:02:39 -- setup/common.sh@62 -- # wait 56637 00:10:57.855 20:02:39 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.855 20:02:39 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:10:57.855 20:02:39 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.855 20:02:39 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:10:57.855 20:02:39 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:10:57.855 20:02:39 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.855 20:02:39 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:57.855 20:02:39 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:57.855 20:02:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:10:57.855 20:02:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:57.855 20:02:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:57.855 20:02:39 -- setup/devices.sh@53 -- # local found=0 00:10:57.855 20:02:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:57.855 20:02:39 -- setup/devices.sh@56 -- # : 00:10:57.855 20:02:39 -- setup/devices.sh@59 -- # local pci status 00:10:57.855 20:02:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:57.855 20:02:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:57.855 20:02:39 -- setup/devices.sh@47 -- # setup output config 00:10:57.855 20:02:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:57.855 20:02:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:57.855 20:02:40 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.855 20:02:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:10:57.855 20:02:40 -- setup/devices.sh@63 -- # found=1 00:10:57.855 20:02:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:57.855 20:02:40 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:57.855 20:02:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.115 20:02:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.115 20:02:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.115 20:02:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.115 20:02:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.374 20:02:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:58.374 20:02:40 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:58.374 20:02:40 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.374 20:02:40 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:58.374 20:02:40 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.374 20:02:40 -- setup/devices.sh@110 -- # cleanup_nvme 00:10:58.374 20:02:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.374 20:02:40 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.374 20:02:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:58.374 20:02:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:10:58.374 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:58.374 20:02:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:58.374 20:02:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:58.635 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:58.635 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:58.635 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:58.635 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:58.635 20:02:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:10:58.635 20:02:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:10:58.635 20:02:40 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.635 20:02:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:10:58.635 20:02:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:10:58.635 20:02:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.635 20:02:40 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.635 20:02:40 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:58.635 20:02:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:10:58.635 20:02:40 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.635 20:02:40 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.635 20:02:40 -- setup/devices.sh@53 -- # local found=0 00:10:58.635 20:02:40 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:58.635 20:02:40 -- setup/devices.sh@56 -- # : 00:10:58.635 20:02:40 -- setup/devices.sh@59 -- # local pci status 00:10:58.635 20:02:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:58.635 20:02:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.635 20:02:40 -- setup/devices.sh@47 -- # setup output config 00:10:58.635 20:02:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:58.635 20:02:40 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:58.895 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.895 20:02:41 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:10:58.895 20:02:41 -- setup/devices.sh@63 -- # found=1 00:10:58.895 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.895 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.895 
20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.154 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.154 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.154 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.154 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.154 20:02:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:59.154 20:02:41 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:59.154 20:02:41 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.154 20:02:41 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:59.154 20:02:41 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:59.154 20:02:41 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.154 20:02:41 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:10:59.154 20:02:41 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:59.154 20:02:41 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:10:59.154 20:02:41 -- setup/devices.sh@50 -- # local mount_point= 00:10:59.154 20:02:41 -- setup/devices.sh@51 -- # local test_file= 00:10:59.154 20:02:41 -- setup/devices.sh@53 -- # local found=0 00:10:59.154 20:02:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:10:59.154 20:02:41 -- setup/devices.sh@59 -- # local pci status 00:10:59.154 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.154 20:02:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:59.154 20:02:41 -- setup/devices.sh@47 -- # setup output config 00:10:59.154 20:02:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:59.154 20:02:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:59.724 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.724 20:02:41 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:10:59.724 20:02:41 -- setup/devices.sh@63 -- # found=1 00:10:59.724 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.724 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.724 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.724 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.724 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.724 20:02:41 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.724 20:02:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.984 20:02:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:59.984 20:02:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:10:59.984 20:02:42 -- setup/devices.sh@68 -- # return 0 00:10:59.984 20:02:42 -- setup/devices.sh@128 -- # cleanup_nvme 00:10:59.984 20:02:42 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.984 20:02:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:59.984 20:02:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:59.984 20:02:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:59.984 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:10:59.984 00:10:59.984 real 0m4.489s 00:10:59.984 user 0m0.795s 00:10:59.984 sys 0m1.443s 00:10:59.984 20:02:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.984 20:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:59.984 ************************************ 00:10:59.984 END TEST nvme_mount 00:10:59.984 ************************************ 00:10:59.984 20:02:42 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:10:59.984 20:02:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:59.984 20:02:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.984 20:02:42 -- common/autotest_common.sh@10 -- # set +x 00:10:59.984 ************************************ 00:10:59.984 START TEST dm_mount 00:10:59.984 ************************************ 00:10:59.984 20:02:42 -- common/autotest_common.sh@1111 -- # dm_mount 00:10:59.984 20:02:42 -- setup/devices.sh@144 -- # pv=nvme0n1 00:10:59.984 20:02:42 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:10:59.984 20:02:42 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:10:59.984 20:02:42 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:10:59.984 20:02:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:59.984 20:02:42 -- setup/common.sh@40 -- # local part_no=2 00:10:59.984 20:02:42 -- setup/common.sh@41 -- # local size=1073741824 00:10:59.984 20:02:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:59.984 20:02:42 -- setup/common.sh@44 -- # parts=() 00:10:59.984 20:02:42 -- setup/common.sh@44 -- # local parts 00:10:59.984 20:02:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:10:59.984 20:02:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:59.984 20:02:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:59.984 20:02:42 -- setup/common.sh@46 -- # (( part++ )) 00:10:59.984 20:02:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:59.984 20:02:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:59.984 20:02:42 -- setup/common.sh@46 -- # (( part++ )) 00:10:59.984 20:02:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:59.984 20:02:42 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:59.984 20:02:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:59.984 20:02:42 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:01.363 Creating new GPT entries in memory. 00:11:01.363 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:01.363 other utilities. 00:11:01.363 20:02:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:11:01.363 20:02:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:01.363 20:02:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:01.363 20:02:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:01.363 20:02:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:02.301 Creating new GPT entries in memory. 00:11:02.301 The operation has completed successfully. 00:11:02.301 20:02:44 -- setup/common.sh@57 -- # (( part++ )) 00:11:02.301 20:02:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:02.301 20:02:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:11:02.301 20:02:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:02.301 20:02:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:03.239 The operation has completed successfully. 00:11:03.239 20:02:45 -- setup/common.sh@57 -- # (( part++ )) 00:11:03.239 20:02:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:03.239 20:02:45 -- setup/common.sh@62 -- # wait 57086 00:11:03.239 20:02:45 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:03.239 20:02:45 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.239 20:02:45 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.239 20:02:45 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:03.239 20:02:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:11:03.239 20:02:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:03.239 20:02:45 -- setup/devices.sh@161 -- # break 00:11:03.239 20:02:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:03.239 20:02:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:03.239 20:02:45 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:03.239 20:02:45 -- setup/devices.sh@166 -- # dm=dm-0 00:11:03.239 20:02:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:03.240 20:02:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:03.240 20:02:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.240 20:02:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:03.240 20:02:45 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.240 20:02:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:03.240 20:02:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:03.240 20:02:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.240 20:02:45 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.240 20:02:45 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:03.240 20:02:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:11:03.240 20:02:45 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:03.240 20:02:45 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:03.240 20:02:45 -- setup/devices.sh@53 -- # local found=0 00:11:03.240 20:02:45 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:03.240 20:02:45 -- setup/devices.sh@56 -- # : 00:11:03.240 20:02:45 -- setup/devices.sh@59 -- # local pci status 00:11:03.240 20:02:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.240 20:02:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:03.240 20:02:45 -- setup/devices.sh@47 -- # setup output config 00:11:03.240 20:02:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:03.240 20:02:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:03.499 20:02:45 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.499 20:02:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:03.499 20:02:45 -- setup/devices.sh@63 -- # found=1 00:11:03.499 20:02:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.499 20:02:45 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.499 20:02:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.759 20:02:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.759 20:02:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:03.759 20:02:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.759 20:02:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.020 20:02:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:04.020 20:02:46 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:04.020 20:02:46 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.020 20:02:46 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:04.020 20:02:46 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:04.020 20:02:46 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.020 20:02:46 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:04.020 20:02:46 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:04.020 20:02:46 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:04.020 20:02:46 -- setup/devices.sh@50 -- # local mount_point= 00:11:04.020 20:02:46 -- setup/devices.sh@51 -- # local test_file= 00:11:04.020 20:02:46 -- setup/devices.sh@53 -- # local found=0 00:11:04.020 20:02:46 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:04.020 20:02:46 -- setup/devices.sh@59 -- # local pci status 00:11:04.020 20:02:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.020 20:02:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:04.020 20:02:46 -- setup/devices.sh@47 -- # setup output config 00:11:04.020 20:02:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:04.020 20:02:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:04.280 20:02:46 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.280 20:02:46 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:04.280 20:02:46 -- setup/devices.sh@63 -- # found=1 00:11:04.280 20:02:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.280 20:02:46 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.280 20:02:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.539 20:02:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.539 20:02:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.539 20:02:46 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.539 20:02:46 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.539 20:02:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:04.539 20:02:46 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:04.539 20:02:46 -- setup/devices.sh@68 -- # return 0 00:11:04.539 20:02:46 -- setup/devices.sh@187 -- # cleanup_dm 00:11:04.539 20:02:46 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.539 20:02:46 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:04.539 20:02:46 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:04.539 20:02:46 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:04.539 20:02:46 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:04.539 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:04.539 20:02:46 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:04.539 20:02:46 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:11:04.539 00:11:04.539 real 0m4.589s 00:11:04.539 user 0m0.535s 00:11:04.539 sys 0m1.011s 00:11:04.539 20:02:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:04.539 20:02:46 -- common/autotest_common.sh@10 -- # set +x 00:11:04.539 ************************************ 00:11:04.539 END TEST dm_mount 00:11:04.539 ************************************ 00:11:04.799 20:02:46 -- setup/devices.sh@1 -- # cleanup 00:11:04.799 20:02:46 -- setup/devices.sh@11 -- # cleanup_nvme 00:11:04.799 20:02:46 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:04.799 20:02:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:04.799 20:02:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:04.799 20:02:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:04.799 20:02:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:05.059 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:05.059 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:05.059 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:05.059 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:05.059 20:02:47 -- setup/devices.sh@12 -- # cleanup_dm 00:11:05.059 20:02:47 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:05.059 20:02:47 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:05.059 20:02:47 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:05.059 20:02:47 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:05.059 20:02:47 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:05.059 20:02:47 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:05.059 00:11:05.059 real 0m11.003s 00:11:05.059 user 0m2.072s 00:11:05.059 sys 0m3.338s 00:11:05.059 20:02:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:05.059 20:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:05.059 ************************************ 00:11:05.059 END TEST devices 00:11:05.059 ************************************ 00:11:05.059 00:11:05.059 real 0m25.872s 00:11:05.059 user 0m8.080s 00:11:05.059 sys 0m12.370s 00:11:05.059 20:02:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:05.059 20:02:47 -- common/autotest_common.sh@10 -- # set +x 00:11:05.059 ************************************ 00:11:05.059 END TEST setup.sh 00:11:05.059 ************************************ 00:11:05.059 20:02:47 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:05.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.998 Hugepages 00:11:05.998 node hugesize free / total 00:11:05.998 node0 1048576kB 0 / 0 00:11:05.998 node0 2048kB 2048 / 2048 00:11:05.998 00:11:05.998 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:05.998 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:05.998 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:06.258 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:11:06.258 20:02:48 -- spdk/autotest.sh@130 -- # uname -s 00:11:06.258 20:02:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:06.258 20:02:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:06.258 20:02:48 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:07.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.197 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:07.197 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:07.197 20:02:49 -- common/autotest_common.sh@1518 -- # sleep 1 00:11:08.136 20:02:50 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:08.136 20:02:50 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:08.136 20:02:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:08.136 20:02:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:08.136 20:02:50 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:08.136 20:02:50 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:08.136 20:02:50 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.136 20:02:50 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.136 20:02:50 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:08.395 20:02:50 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:11:08.395 20:02:50 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:08.395 20:02:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:08.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.654 Waiting for block devices as requested 00:11:08.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.914 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.914 20:02:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:08.915 20:02:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:11:08.915 20:02:51 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:08.915 20:02:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:08.915 20:02:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:08.915 20:02:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1543 -- # continue 00:11:08.915 20:02:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:08.915 20:02:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:11:08.915 20:02:51 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:08.915 20:02:51 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:08.915 20:02:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:08.915 20:02:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:08.915 20:02:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:08.915 20:02:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:08.915 20:02:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:08.915 20:02:51 -- common/autotest_common.sh@1543 -- # continue 00:11:08.915 20:02:51 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:08.915 20:02:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:08.915 20:02:51 -- common/autotest_common.sh@10 -- # set +x 00:11:09.175 20:02:51 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:09.175 20:02:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:09.175 20:02:51 -- common/autotest_common.sh@10 -- # set +x 00:11:09.175 20:02:51 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:09.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:11:10.003 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.003 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:10.003 20:02:52 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:10.003 20:02:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:10.003 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.003 20:02:52 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:10.263 20:02:52 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:11:10.263 20:02:52 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:11:10.263 20:02:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:11:10.263 20:02:52 -- common/autotest_common.sh@1563 -- # local bdfs 00:11:10.263 20:02:52 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:11:10.263 20:02:52 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:10.263 20:02:52 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:10.263 20:02:52 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:10.263 20:02:52 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:10.263 20:02:52 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:10.263 20:02:52 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:11:10.263 20:02:52 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:10.263 20:02:52 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:11:10.263 20:02:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:10.263 20:02:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:10.263 20:02:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:10.263 20:02:52 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:11:10.263 20:02:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:10.263 20:02:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:10.263 20:02:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:10.263 20:02:52 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:11:10.263 20:02:52 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:11:10.263 20:02:52 -- common/autotest_common.sh@1579 -- # return 0 00:11:10.263 20:02:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:10.263 20:02:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:10.263 20:02:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:10.263 20:02:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:10.263 20:02:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:10.263 20:02:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:10.263 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.263 20:02:52 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:10.263 20:02:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.263 20:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.263 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.263 ************************************ 00:11:10.263 START TEST env 00:11:10.263 ************************************ 00:11:10.263 20:02:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:10.522 * Looking for test storage... 
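The pre-cleanup and opal-revert passes above both resolve the NVMe controller addresses the same way: `scripts/gen_nvme.sh` emits a JSON bdev config and `jq -r '.config[].params.traddr'` pulls out the PCI BDFs (0000:00:10.0 and 0000:00:11.0 here). Below is a minimal standalone sketch of that enumeration step; the `rootdir` default and the empty-result check are illustrative assumptions, not part of the original helper.

```bash
#!/usr/bin/env bash
# Minimal sketch: collect NVMe controller PCI addresses (BDFs) the way the
# autotest log does, via gen_nvme.sh | jq. Paths are assumptions.
set -euo pipefail

rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}   # assumed repo location

get_nvme_bdfs() {
    local bdfs
    # gen_nvme.sh emits a JSON bdev config; each entry carries the PCI traddr.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "No NVMe controllers found" >&2; return 1; }
    printf '%s\n' "${bdfs[@]}"
}

get_nvme_bdfs   # on this VM the log shows: 0000:00:10.0 and 0000:00:11.0
```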
00:11:10.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:10.522 20:02:52 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:10.522 20:02:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.522 20:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.522 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.522 ************************************ 00:11:10.522 START TEST env_memory 00:11:10.522 ************************************ 00:11:10.522 20:02:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:10.522 00:11:10.522 00:11:10.522 CUnit - A unit testing framework for C - Version 2.1-3 00:11:10.522 http://cunit.sourceforge.net/ 00:11:10.522 00:11:10.523 00:11:10.523 Suite: memory 00:11:10.523 Test: alloc and free memory map ...[2024-04-24 20:02:52.728929] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:10.523 passed 00:11:10.523 Test: mem map translation ...[2024-04-24 20:02:52.751548] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:10.523 [2024-04-24 20:02:52.751618] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:10.523 [2024-04-24 20:02:52.751666] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:10.523 [2024-04-24 20:02:52.751676] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:10.782 passed 00:11:10.782 Test: mem map registration ...[2024-04-24 20:02:52.804595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:10.782 [2024-04-24 20:02:52.804691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:10.782 passed 00:11:10.782 Test: mem map adjacent registrations ...passed 00:11:10.782 00:11:10.782 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.782 suites 1 1 n/a 0 0 00:11:10.782 tests 4 4 4 0 0 00:11:10.782 asserts 152 152 152 0 n/a 00:11:10.782 00:11:10.782 Elapsed time = 0.178 seconds 00:11:10.782 00:11:10.782 real 0m0.197s 00:11:10.782 user 0m0.181s 00:11:10.782 sys 0m0.013s 00:11:10.782 20:02:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:10.782 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.782 ************************************ 00:11:10.782 END TEST env_memory 00:11:10.782 ************************************ 00:11:10.782 20:02:52 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:10.782 20:02:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:10.782 20:02:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.782 20:02:52 -- common/autotest_common.sh@10 -- # set +x 00:11:10.782 ************************************ 00:11:10.782 START TEST env_vtophys 00:11:10.782 ************************************ 00:11:10.782 20:02:53 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:11.042 EAL: lib.eal log level changed from notice to debug 00:11:11.042 EAL: Detected lcore 0 as core 0 on socket 0 00:11:11.042 EAL: Detected lcore 1 as core 0 on socket 0 00:11:11.042 EAL: Detected lcore 2 as core 0 on socket 0 00:11:11.042 EAL: Detected lcore 3 as core 0 on socket 0 00:11:11.042 EAL: Detected lcore 4 as core 0 on socket 0 00:11:11.042 EAL: Detected lcore 5 as core 0 on socket 0 00:11:11.042 EAL: Detected lcore 6 as core 0 on socket 0 00:11:11.043 EAL: Detected lcore 7 as core 0 on socket 0 00:11:11.043 EAL: Detected lcore 8 as core 0 on socket 0 00:11:11.043 EAL: Detected lcore 9 as core 0 on socket 0 00:11:11.043 EAL: Maximum logical cores by configuration: 128 00:11:11.043 EAL: Detected CPU lcores: 10 00:11:11.043 EAL: Detected NUMA nodes: 1 00:11:11.043 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:11:11.043 EAL: Detected shared linkage of DPDK 00:11:11.043 EAL: No shared files mode enabled, IPC will be disabled 00:11:11.043 EAL: Selected IOVA mode 'PA' 00:11:11.043 EAL: Probing VFIO support... 00:11:11.043 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:11.043 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:11.043 EAL: Ask a virtual area of 0x2e000 bytes 00:11:11.043 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:11.043 EAL: Setting up physically contiguous memory... 00:11:11.043 EAL: Setting maximum number of open files to 524288 00:11:11.043 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:11.043 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:11.043 EAL: Ask a virtual area of 0x61000 bytes 00:11:11.043 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:11.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:11.043 EAL: Ask a virtual area of 0x400000000 bytes 00:11:11.043 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:11.043 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:11.043 EAL: Ask a virtual area of 0x61000 bytes 00:11:11.043 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:11.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:11.043 EAL: Ask a virtual area of 0x400000000 bytes 00:11:11.043 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:11.043 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:11.043 EAL: Ask a virtual area of 0x61000 bytes 00:11:11.043 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:11.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:11.043 EAL: Ask a virtual area of 0x400000000 bytes 00:11:11.043 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:11.043 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:11.043 EAL: Ask a virtual area of 0x61000 bytes 00:11:11.043 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:11.043 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:11.043 EAL: Ask a virtual area of 0x400000000 bytes 00:11:11.043 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:11.043 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:11.043 EAL: Hugepages will be freed exactly as allocated. 
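The EAL banner above reports 2048 free 2 MB hugepages on node0, a missing vfio module ("VFIO modules not loaded, skipping VFIO support"), and IOVA mode 'PA' as the fallback. A quick way to check those same preconditions outside the test run is sketched below; the sysfs paths are standard Linux locations, but the script itself is an illustrative check, not part of the suite.

```bash
#!/usr/bin/env bash
# Sketch: verify the environment the EAL log describes — 2 MB hugepages
# available and whether the vfio / vfio-pci modules are loaded.
set -euo pipefail

hp_dir=/sys/kernel/mm/hugepages/hugepages-2048kB
total=$(cat "$hp_dir/nr_hugepages")
free=$(cat "$hp_dir/free_hugepages")
echo "2MB hugepages: $free free / $total total"

if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "vfio + vfio-pci loaded: EAL can probe VFIO"
else
    # Matches the log: without vfio, DPDK skips VFIO and selects IOVA mode 'PA'.
    echo "vfio not loaded: expect 'skipping VFIO support' and IOVA mode PA"
fi
```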
00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: TSC frequency is ~2290000 KHz 00:11:11.043 EAL: Main lcore 0 is ready (tid=7f137770fa00;cpuset=[0]) 00:11:11.043 EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 0 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 2MB 00:11:11.043 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:11.043 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:11.043 EAL: Mem event callback 'spdk:(nil)' registered 00:11:11.043 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:11.043 00:11:11.043 00:11:11.043 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.043 http://cunit.sourceforge.net/ 00:11:11.043 00:11:11.043 00:11:11.043 Suite: components_suite 00:11:11.043 Test: vtophys_malloc_test ...passed 00:11:11.043 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 4 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 4MB 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was shrunk by 4MB 00:11:11.043 EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 4 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 6MB 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was shrunk by 6MB 00:11:11.043 EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 4 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 10MB 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was shrunk by 10MB 00:11:11.043 EAL: Trying to obtain current memory policy. 
00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 4 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 18MB 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was shrunk by 18MB 00:11:11.043 EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 4 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 34MB 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was shrunk by 34MB 00:11:11.043 EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.043 EAL: Restoring previous memory policy: 4 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was expanded by 66MB 00:11:11.043 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.043 EAL: request: mp_malloc_sync 00:11:11.043 EAL: No shared files mode enabled, IPC is disabled 00:11:11.043 EAL: Heap on socket 0 was shrunk by 66MB 00:11:11.043 EAL: Trying to obtain current memory policy. 00:11:11.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.303 EAL: Restoring previous memory policy: 4 00:11:11.303 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.303 EAL: request: mp_malloc_sync 00:11:11.303 EAL: No shared files mode enabled, IPC is disabled 00:11:11.303 EAL: Heap on socket 0 was expanded by 130MB 00:11:11.303 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.303 EAL: request: mp_malloc_sync 00:11:11.303 EAL: No shared files mode enabled, IPC is disabled 00:11:11.303 EAL: Heap on socket 0 was shrunk by 130MB 00:11:11.303 EAL: Trying to obtain current memory policy. 00:11:11.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.303 EAL: Restoring previous memory policy: 4 00:11:11.303 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.303 EAL: request: mp_malloc_sync 00:11:11.303 EAL: No shared files mode enabled, IPC is disabled 00:11:11.303 EAL: Heap on socket 0 was expanded by 258MB 00:11:11.303 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.303 EAL: request: mp_malloc_sync 00:11:11.303 EAL: No shared files mode enabled, IPC is disabled 00:11:11.303 EAL: Heap on socket 0 was shrunk by 258MB 00:11:11.303 EAL: Trying to obtain current memory policy. 
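The expand/shrink pairs logged above step through 4, 6, 10, 18, 34, 66, 130 and 258 MB (and continue to 514 and 1026 MB below). They appear to follow 2^n + 2 MB, i.e. a doubling allocation plus a constant ~2 MB of heap overhead; that reading of the pattern is an inference from the log output, not something the test itself states. The loop below only reproduces the arithmetic.

```bash
#!/usr/bin/env bash
# Reproduce the heap-expansion sizes seen in vtophys_spdk_malloc_test output.
# Assumption (drawn from the log pattern only): each step is 2^n MB + 2 MB overhead.
for n in $(seq 1 10); do
    echo "step $n: $(( (1 << n) + 2 )) MB"
done
# Prints 4 6 10 18 34 66 130 258 514 1026 — the sizes in the EAL messages.
```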
00:11:11.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.562 EAL: Restoring previous memory policy: 4 00:11:11.562 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.562 EAL: request: mp_malloc_sync 00:11:11.562 EAL: No shared files mode enabled, IPC is disabled 00:11:11.562 EAL: Heap on socket 0 was expanded by 514MB 00:11:11.562 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.562 EAL: request: mp_malloc_sync 00:11:11.562 EAL: No shared files mode enabled, IPC is disabled 00:11:11.562 EAL: Heap on socket 0 was shrunk by 514MB 00:11:11.562 EAL: Trying to obtain current memory policy. 00:11:11.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:11.821 EAL: Restoring previous memory policy: 4 00:11:11.821 EAL: Calling mem event callback 'spdk:(nil)' 00:11:11.822 EAL: request: mp_malloc_sync 00:11:11.822 EAL: No shared files mode enabled, IPC is disabled 00:11:11.822 EAL: Heap on socket 0 was expanded by 1026MB 00:11:11.822 EAL: Calling mem event callback 'spdk:(nil)' 00:11:12.081 passed 00:11:12.081 00:11:12.081 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.081 suites 1 1 n/a 0 0 00:11:12.081 tests 2 2 2 0 0 00:11:12.081 asserts 5386 5386 5386 0 n/a 00:11:12.081 00:11:12.081 Elapsed time = 1.030 seconds 00:11:12.081 EAL: request: mp_malloc_sync 00:11:12.081 EAL: No shared files mode enabled, IPC is disabled 00:11:12.081 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:12.081 EAL: Calling mem event callback 'spdk:(nil)' 00:11:12.081 EAL: request: mp_malloc_sync 00:11:12.081 EAL: No shared files mode enabled, IPC is disabled 00:11:12.081 EAL: Heap on socket 0 was shrunk by 2MB 00:11:12.081 EAL: No shared files mode enabled, IPC is disabled 00:11:12.081 EAL: No shared files mode enabled, IPC is disabled 00:11:12.081 EAL: No shared files mode enabled, IPC is disabled 00:11:12.081 00:11:12.081 real 0m1.218s 00:11:12.081 user 0m0.664s 00:11:12.081 sys 0m0.425s 00:11:12.081 20:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.081 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.081 ************************************ 00:11:12.081 END TEST env_vtophys 00:11:12.081 ************************************ 00:11:12.081 20:02:54 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:12.081 20:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.081 20:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.081 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.341 ************************************ 00:11:12.341 START TEST env_pci 00:11:12.341 ************************************ 00:11:12.341 20:02:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:12.341 00:11:12.341 00:11:12.341 CUnit - A unit testing framework for C - Version 2.1-3 00:11:12.341 http://cunit.sourceforge.net/ 00:11:12.341 00:11:12.341 00:11:12.341 Suite: pci 00:11:12.341 Test: pci_hook ...[2024-04-24 20:02:54.398902] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58297 has claimed it 00:11:12.341 passed 00:11:12.341 00:11:12.341 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.341 suites 1 1 n/a 0 0 00:11:12.341 tests 1 1 1 0 0 00:11:12.341 asserts 25 25 25 0 n/a 00:11:12.341 00:11:12.341 Elapsed time = 0.002 seconds 00:11:12.341 EAL: Cannot find device (10000:00:01.0) 00:11:12.341 EAL: Failed to attach device 
on primary process 00:11:12.341 00:11:12.341 real 0m0.019s 00:11:12.342 user 0m0.010s 00:11:12.342 sys 0m0.009s 00:11:12.342 20:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.342 ************************************ 00:11:12.342 END TEST env_pci 00:11:12.342 ************************************ 00:11:12.342 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.342 20:02:54 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:12.342 20:02:54 -- env/env.sh@15 -- # uname 00:11:12.342 20:02:54 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:12.342 20:02:54 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:12.342 20:02:54 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:12.342 20:02:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:12.342 20:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.342 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.342 ************************************ 00:11:12.342 START TEST env_dpdk_post_init 00:11:12.342 ************************************ 00:11:12.342 20:02:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:12.602 EAL: Detected CPU lcores: 10 00:11:12.602 EAL: Detected NUMA nodes: 1 00:11:12.602 EAL: Detected shared linkage of DPDK 00:11:12.602 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:12.602 EAL: Selected IOVA mode 'PA' 00:11:12.602 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:12.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:12.602 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:12.602 Starting DPDK initialization... 00:11:12.602 Starting SPDK post initialization... 00:11:12.602 SPDK NVMe probe 00:11:12.602 Attaching to 0000:00:10.0 00:11:12.602 Attaching to 0000:00:11.0 00:11:12.602 Attached to 0000:00:10.0 00:11:12.602 Attached to 0000:00:11.0 00:11:12.602 Cleaning up... 
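The commands echoed above show how the env suite drives each binary: `pci_ut` directly, and `env_dpdk_post_init` with a single-core mask and a pinned base virtual address. The sketch below re-runs the post-init probe the same way; the binary path and flags are taken verbatim from the log, and since the run attaches the NVMe devices it is meant as an illustration rather than something to execute on a host you care about.

```bash
#!/usr/bin/env bash
# Sketch: invoke the env_dpdk_post_init test the way the autotest log does.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk   # path as it appears in the log

# Single core (0x1) and the same base virtual address the suite pins on Linux.
"$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
# Expected output mirrors the log: probe of 0000:00:10.0 and 0000:00:11.0,
# "Attached to ..." for each, then "Cleaning up...".
```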
00:11:12.602 00:11:12.602 real 0m0.192s 00:11:12.602 user 0m0.051s 00:11:12.602 sys 0m0.042s 00:11:12.602 20:02:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.602 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.602 ************************************ 00:11:12.602 END TEST env_dpdk_post_init 00:11:12.602 ************************************ 00:11:12.602 20:02:54 -- env/env.sh@26 -- # uname 00:11:12.602 20:02:54 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:12.602 20:02:54 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:12.602 20:02:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.602 20:02:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.602 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:11:12.862 ************************************ 00:11:12.862 START TEST env_mem_callbacks 00:11:12.862 ************************************ 00:11:12.862 20:02:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:12.862 EAL: Detected CPU lcores: 10 00:11:12.862 EAL: Detected NUMA nodes: 1 00:11:12.862 EAL: Detected shared linkage of DPDK 00:11:12.862 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:12.862 EAL: Selected IOVA mode 'PA' 00:11:12.862 00:11:12.862 00:11:12.862 CUnit - A unit testing framework for C - Version 2.1-3 00:11:12.862 http://cunit.sourceforge.net/ 00:11:12.862 00:11:12.862 00:11:12.862 Suite: memory 00:11:12.862 Test: test ... 00:11:12.862 register 0x200000200000 2097152 00:11:12.862 malloc 3145728 00:11:12.862 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:12.862 register 0x200000400000 4194304 00:11:12.862 buf 0x200000500000 len 3145728 PASSED 00:11:12.862 malloc 64 00:11:12.862 buf 0x2000004fff40 len 64 PASSED 00:11:12.862 malloc 4194304 00:11:12.862 register 0x200000800000 6291456 00:11:12.862 buf 0x200000a00000 len 4194304 PASSED 00:11:12.862 free 0x200000500000 3145728 00:11:12.862 free 0x2000004fff40 64 00:11:12.862 unregister 0x200000400000 4194304 PASSED 00:11:12.862 free 0x200000a00000 4194304 00:11:12.862 unregister 0x200000800000 6291456 PASSED 00:11:12.862 malloc 8388608 00:11:12.862 register 0x200000400000 10485760 00:11:12.862 buf 0x200000600000 len 8388608 PASSED 00:11:12.862 free 0x200000600000 8388608 00:11:12.862 unregister 0x200000400000 10485760 PASSED 00:11:12.862 passed 00:11:12.862 00:11:12.862 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.862 suites 1 1 n/a 0 0 00:11:12.862 tests 1 1 1 0 0 00:11:12.862 asserts 15 15 15 0 n/a 00:11:12.862 00:11:12.862 Elapsed time = 0.012 seconds 00:11:12.862 00:11:12.862 real 0m0.152s 00:11:12.862 user 0m0.024s 00:11:12.862 sys 0m0.026s 00:11:12.862 20:02:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.862 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.862 ************************************ 00:11:12.862 END TEST env_mem_callbacks 00:11:12.862 ************************************ 00:11:12.862 00:11:12.862 real 0m2.659s 00:11:12.862 user 0m1.241s 00:11:12.862 sys 0m1.020s 00:11:12.862 20:02:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.862 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.862 ************************************ 00:11:12.862 END TEST env 00:11:12.862 ************************************ 00:11:13.122 20:02:55 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
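Every test above is bracketed by the same "START TEST …" / "END TEST …" banners plus a `real/user/sys` timing line; that framing comes from the `run_test` helper in common/autotest_common.sh. A stripped-down re-implementation of the wrapper is sketched below; the real helper also manages xtrace toggling and argument checks, which are omitted here.

```bash
#!/usr/bin/env bash
# Minimal sketch of a run_test-style wrapper: banner, timed command, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                 # timing shows up as the real/user/sys lines in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage, mirroring the log:
#   run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
```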
00:11:13.122 20:02:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:13.122 20:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.122 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:11:13.122 ************************************ 00:11:13.122 START TEST rpc 00:11:13.122 ************************************ 00:11:13.122 20:02:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:13.122 * Looking for test storage... 00:11:13.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:13.122 20:02:55 -- rpc/rpc.sh@65 -- # spdk_pid=58427 00:11:13.122 20:02:55 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:13.122 20:02:55 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:13.122 20:02:55 -- rpc/rpc.sh@67 -- # waitforlisten 58427 00:11:13.122 20:02:55 -- common/autotest_common.sh@817 -- # '[' -z 58427 ']' 00:11:13.122 20:02:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.122 20:02:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:13.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.122 20:02:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.122 20:02:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:13.122 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:11:13.381 [2024-04-24 20:02:55.423179] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:13.381 [2024-04-24 20:02:55.423250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58427 ] 00:11:13.381 [2024-04-24 20:02:55.548077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.640 [2024-04-24 20:02:55.653188] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:13.640 [2024-04-24 20:02:55.653241] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58427' to capture a snapshot of events at runtime. 00:11:13.640 [2024-04-24 20:02:55.653248] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.640 [2024-04-24 20:02:55.653253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.640 [2024-04-24 20:02:55.653257] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58427 for offline analysis/debug. 
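The rpc.sh run above launches `build/bin/spdk_tgt -e bdev`, records its pid (58427 here), and `waitforlisten` blocks until the target answers on /var/tmp/spdk.sock before the bdev_malloc and passthru RPCs that follow. A simplified version of that start-and-wait sequence, using the `rpc.py` client shipped with SPDK, is sketched below; the polling loop is a stand-in for the real waitforlisten helper, not a copy of it.

```bash
#!/usr/bin/env bash
# Sketch: start spdk_tgt with the bdev trace group enabled and wait for its
# RPC socket, roughly what rpc.sh + waitforlisten do in the log.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk.sock

"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
trap 'kill $spdk_pid' EXIT

# Stand-in for waitforlisten: poll until the UNIX socket answers an RPC.
for _ in $(seq 1 100); do
    if "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

# Same calls the integrity test issues once the target is up:
"$rootdir/scripts/rpc.py" -s "$sock" bdev_malloc_create 8 512
"$rootdir/scripts/rpc.py" -s "$sock" bdev_get_bdevs
```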
00:11:13.640 [2024-04-24 20:02:55.653300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.207 20:02:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:14.207 20:02:56 -- common/autotest_common.sh@850 -- # return 0 00:11:14.207 20:02:56 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:14.207 20:02:56 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:14.207 20:02:56 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:14.207 20:02:56 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:14.207 20:02:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.207 20:02:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.207 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.207 ************************************ 00:11:14.207 START TEST rpc_integrity 00:11:14.207 ************************************ 00:11:14.207 20:02:56 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:11:14.207 20:02:56 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.207 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.207 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.207 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.207 20:02:56 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:14.207 20:02:56 -- rpc/rpc.sh@13 -- # jq length 00:11:14.467 20:02:56 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:14.467 20:02:56 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:14.467 20:02:56 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:14.467 { 00:11:14.467 "name": "Malloc0", 00:11:14.467 "aliases": [ 00:11:14.467 "2dd859dc-9a1b-4982-85a9-81e7545b6e9d" 00:11:14.467 ], 00:11:14.467 "product_name": "Malloc disk", 00:11:14.467 "block_size": 512, 00:11:14.467 "num_blocks": 16384, 00:11:14.467 "uuid": "2dd859dc-9a1b-4982-85a9-81e7545b6e9d", 00:11:14.467 "assigned_rate_limits": { 00:11:14.467 "rw_ios_per_sec": 0, 00:11:14.467 "rw_mbytes_per_sec": 0, 00:11:14.467 "r_mbytes_per_sec": 0, 00:11:14.467 "w_mbytes_per_sec": 0 00:11:14.467 }, 00:11:14.467 "claimed": false, 00:11:14.467 "zoned": false, 00:11:14.467 "supported_io_types": { 00:11:14.467 "read": true, 00:11:14.467 "write": true, 00:11:14.467 "unmap": true, 00:11:14.467 "write_zeroes": true, 00:11:14.467 "flush": true, 00:11:14.467 "reset": true, 00:11:14.467 "compare": false, 00:11:14.467 "compare_and_write": false, 00:11:14.467 "abort": true, 00:11:14.467 "nvme_admin": false, 00:11:14.467 "nvme_io": false 00:11:14.467 }, 00:11:14.467 "memory_domains": [ 00:11:14.467 { 00:11:14.467 "dma_device_id": "system", 00:11:14.467 "dma_device_type": 1 
00:11:14.467 }, 00:11:14.467 { 00:11:14.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.467 "dma_device_type": 2 00:11:14.467 } 00:11:14.467 ], 00:11:14.467 "driver_specific": {} 00:11:14.467 } 00:11:14.467 ]' 00:11:14.467 20:02:56 -- rpc/rpc.sh@17 -- # jq length 00:11:14.467 20:02:56 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:14.467 20:02:56 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 [2024-04-24 20:02:56.588648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:14.467 [2024-04-24 20:02:56.588728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.467 [2024-04-24 20:02:56.588743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd87be0 00:11:14.467 [2024-04-24 20:02:56.588749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.467 [2024-04-24 20:02:56.590221] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.467 [2024-04-24 20:02:56.590263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:14.467 Passthru0 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:14.467 { 00:11:14.467 "name": "Malloc0", 00:11:14.467 "aliases": [ 00:11:14.467 "2dd859dc-9a1b-4982-85a9-81e7545b6e9d" 00:11:14.467 ], 00:11:14.467 "product_name": "Malloc disk", 00:11:14.467 "block_size": 512, 00:11:14.467 "num_blocks": 16384, 00:11:14.467 "uuid": "2dd859dc-9a1b-4982-85a9-81e7545b6e9d", 00:11:14.467 "assigned_rate_limits": { 00:11:14.467 "rw_ios_per_sec": 0, 00:11:14.467 "rw_mbytes_per_sec": 0, 00:11:14.467 "r_mbytes_per_sec": 0, 00:11:14.467 "w_mbytes_per_sec": 0 00:11:14.467 }, 00:11:14.467 "claimed": true, 00:11:14.467 "claim_type": "exclusive_write", 00:11:14.467 "zoned": false, 00:11:14.467 "supported_io_types": { 00:11:14.467 "read": true, 00:11:14.467 "write": true, 00:11:14.467 "unmap": true, 00:11:14.467 "write_zeroes": true, 00:11:14.467 "flush": true, 00:11:14.467 "reset": true, 00:11:14.467 "compare": false, 00:11:14.467 "compare_and_write": false, 00:11:14.467 "abort": true, 00:11:14.467 "nvme_admin": false, 00:11:14.467 "nvme_io": false 00:11:14.467 }, 00:11:14.467 "memory_domains": [ 00:11:14.467 { 00:11:14.467 "dma_device_id": "system", 00:11:14.467 "dma_device_type": 1 00:11:14.467 }, 00:11:14.467 { 00:11:14.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.467 "dma_device_type": 2 00:11:14.467 } 00:11:14.467 ], 00:11:14.467 "driver_specific": {} 00:11:14.467 }, 00:11:14.467 { 00:11:14.467 "name": "Passthru0", 00:11:14.467 "aliases": [ 00:11:14.467 "bbce82d1-28bf-5f4e-9e5b-440b0a19a165" 00:11:14.467 ], 00:11:14.467 "product_name": "passthru", 00:11:14.467 "block_size": 512, 00:11:14.467 "num_blocks": 16384, 00:11:14.467 "uuid": "bbce82d1-28bf-5f4e-9e5b-440b0a19a165", 00:11:14.467 "assigned_rate_limits": { 00:11:14.467 "rw_ios_per_sec": 0, 00:11:14.467 "rw_mbytes_per_sec": 0, 00:11:14.467 "r_mbytes_per_sec": 0, 00:11:14.467 "w_mbytes_per_sec": 0 
00:11:14.467 }, 00:11:14.467 "claimed": false, 00:11:14.467 "zoned": false, 00:11:14.467 "supported_io_types": { 00:11:14.467 "read": true, 00:11:14.467 "write": true, 00:11:14.467 "unmap": true, 00:11:14.467 "write_zeroes": true, 00:11:14.467 "flush": true, 00:11:14.467 "reset": true, 00:11:14.467 "compare": false, 00:11:14.467 "compare_and_write": false, 00:11:14.467 "abort": true, 00:11:14.467 "nvme_admin": false, 00:11:14.467 "nvme_io": false 00:11:14.467 }, 00:11:14.467 "memory_domains": [ 00:11:14.467 { 00:11:14.467 "dma_device_id": "system", 00:11:14.467 "dma_device_type": 1 00:11:14.467 }, 00:11:14.467 { 00:11:14.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.467 "dma_device_type": 2 00:11:14.467 } 00:11:14.467 ], 00:11:14.467 "driver_specific": { 00:11:14.467 "passthru": { 00:11:14.467 "name": "Passthru0", 00:11:14.467 "base_bdev_name": "Malloc0" 00:11:14.467 } 00:11:14.467 } 00:11:14.467 } 00:11:14.467 ]' 00:11:14.467 20:02:56 -- rpc/rpc.sh@21 -- # jq length 00:11:14.467 20:02:56 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:14.467 20:02:56 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:14.467 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.467 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.467 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.467 20:02:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:14.467 20:02:56 -- rpc/rpc.sh@26 -- # jq length 00:11:14.727 ************************************ 00:11:14.727 END TEST rpc_integrity 00:11:14.727 ************************************ 00:11:14.727 20:02:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:14.727 00:11:14.727 real 0m0.327s 00:11:14.727 user 0m0.199s 00:11:14.727 sys 0m0.052s 00:11:14.727 20:02:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.727 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.727 20:02:56 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:14.727 20:02:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.727 20:02:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.727 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.727 ************************************ 00:11:14.727 START TEST rpc_plugins 00:11:14.727 ************************************ 00:11:14.727 20:02:56 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:11:14.727 20:02:56 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:14.727 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.727 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.727 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.727 20:02:56 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:14.727 20:02:56 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:14.727 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.727 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.727 20:02:56 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.727 20:02:56 -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:14.727 { 00:11:14.727 "name": "Malloc1", 00:11:14.727 "aliases": [ 00:11:14.727 "88356767-8f4f-44ac-a372-a1a5cd7a84ae" 00:11:14.727 ], 00:11:14.727 "product_name": "Malloc disk", 00:11:14.727 "block_size": 4096, 00:11:14.727 "num_blocks": 256, 00:11:14.727 "uuid": "88356767-8f4f-44ac-a372-a1a5cd7a84ae", 00:11:14.727 "assigned_rate_limits": { 00:11:14.727 "rw_ios_per_sec": 0, 00:11:14.727 "rw_mbytes_per_sec": 0, 00:11:14.727 "r_mbytes_per_sec": 0, 00:11:14.727 "w_mbytes_per_sec": 0 00:11:14.727 }, 00:11:14.727 "claimed": false, 00:11:14.727 "zoned": false, 00:11:14.727 "supported_io_types": { 00:11:14.727 "read": true, 00:11:14.727 "write": true, 00:11:14.727 "unmap": true, 00:11:14.727 "write_zeroes": true, 00:11:14.727 "flush": true, 00:11:14.727 "reset": true, 00:11:14.727 "compare": false, 00:11:14.727 "compare_and_write": false, 00:11:14.727 "abort": true, 00:11:14.727 "nvme_admin": false, 00:11:14.727 "nvme_io": false 00:11:14.727 }, 00:11:14.727 "memory_domains": [ 00:11:14.727 { 00:11:14.727 "dma_device_id": "system", 00:11:14.727 "dma_device_type": 1 00:11:14.727 }, 00:11:14.727 { 00:11:14.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.727 "dma_device_type": 2 00:11:14.727 } 00:11:14.727 ], 00:11:14.727 "driver_specific": {} 00:11:14.727 } 00:11:14.727 ]' 00:11:14.727 20:02:56 -- rpc/rpc.sh@32 -- # jq length 00:11:14.727 20:02:56 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:14.727 20:02:56 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:14.727 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.727 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.986 20:02:56 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:14.986 20:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.986 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 20:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.986 20:02:56 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:14.986 20:02:56 -- rpc/rpc.sh@36 -- # jq length 00:11:14.986 ************************************ 00:11:14.986 END TEST rpc_plugins 00:11:14.986 ************************************ 00:11:14.986 20:02:57 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:14.986 00:11:14.986 real 0m0.153s 00:11:14.986 user 0m0.099s 00:11:14.986 sys 0m0.015s 00:11:14.986 20:02:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.986 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 20:02:57 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:14.986 20:02:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:14.986 20:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.986 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 ************************************ 00:11:14.986 START TEST rpc_trace_cmd_test 00:11:14.986 ************************************ 00:11:14.986 20:02:57 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:11:14.986 20:02:57 -- rpc/rpc.sh@40 -- # local info 00:11:14.986 20:02:57 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:14.986 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.986 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:14.986 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.986 20:02:57 -- rpc/rpc.sh@42 -- # 
info='{ 00:11:14.986 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58427", 00:11:14.986 "tpoint_group_mask": "0x8", 00:11:14.986 "iscsi_conn": { 00:11:14.986 "mask": "0x2", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "scsi": { 00:11:14.986 "mask": "0x4", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "bdev": { 00:11:14.986 "mask": "0x8", 00:11:14.986 "tpoint_mask": "0xffffffffffffffff" 00:11:14.986 }, 00:11:14.986 "nvmf_rdma": { 00:11:14.986 "mask": "0x10", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "nvmf_tcp": { 00:11:14.986 "mask": "0x20", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "ftl": { 00:11:14.986 "mask": "0x40", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "blobfs": { 00:11:14.986 "mask": "0x80", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "dsa": { 00:11:14.986 "mask": "0x200", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "thread": { 00:11:14.986 "mask": "0x400", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "nvme_pcie": { 00:11:14.986 "mask": "0x800", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "iaa": { 00:11:14.986 "mask": "0x1000", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "nvme_tcp": { 00:11:14.986 "mask": "0x2000", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "bdev_nvme": { 00:11:14.986 "mask": "0x4000", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 }, 00:11:14.986 "sock": { 00:11:14.986 "mask": "0x8000", 00:11:14.986 "tpoint_mask": "0x0" 00:11:14.986 } 00:11:14.986 }' 00:11:14.986 20:02:57 -- rpc/rpc.sh@43 -- # jq length 00:11:15.245 20:02:57 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:15.245 20:02:57 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:15.245 20:02:57 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:15.245 20:02:57 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:15.245 20:02:57 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:15.245 20:02:57 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:15.245 20:02:57 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:15.245 20:02:57 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:15.245 ************************************ 00:11:15.245 END TEST rpc_trace_cmd_test 00:11:15.245 ************************************ 00:11:15.245 20:02:57 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:15.245 00:11:15.245 real 0m0.269s 00:11:15.245 user 0m0.212s 00:11:15.245 sys 0m0.047s 00:11:15.245 20:02:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:15.245 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 20:02:57 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:15.503 20:02:57 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:15.503 20:02:57 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:15.503 20:02:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:15.503 20:02:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.503 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 ************************************ 00:11:15.503 START TEST rpc_daemon_integrity 00:11:15.503 ************************************ 00:11:15.503 20:02:57 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:11:15.503 20:02:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.503 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.503 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 20:02:57 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:11:15.503 20:02:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:15.503 20:02:57 -- rpc/rpc.sh@13 -- # jq length 00:11:15.503 20:02:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:15.503 20:02:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:15.503 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.503 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.503 20:02:57 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:15.503 20:02:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:15.503 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.503 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.503 20:02:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:15.503 { 00:11:15.503 "name": "Malloc2", 00:11:15.503 "aliases": [ 00:11:15.503 "a667f96e-9229-4d03-bf7c-aefa7eccf652" 00:11:15.503 ], 00:11:15.503 "product_name": "Malloc disk", 00:11:15.503 "block_size": 512, 00:11:15.503 "num_blocks": 16384, 00:11:15.503 "uuid": "a667f96e-9229-4d03-bf7c-aefa7eccf652", 00:11:15.503 "assigned_rate_limits": { 00:11:15.503 "rw_ios_per_sec": 0, 00:11:15.503 "rw_mbytes_per_sec": 0, 00:11:15.504 "r_mbytes_per_sec": 0, 00:11:15.504 "w_mbytes_per_sec": 0 00:11:15.504 }, 00:11:15.504 "claimed": false, 00:11:15.504 "zoned": false, 00:11:15.504 "supported_io_types": { 00:11:15.504 "read": true, 00:11:15.504 "write": true, 00:11:15.504 "unmap": true, 00:11:15.504 "write_zeroes": true, 00:11:15.504 "flush": true, 00:11:15.504 "reset": true, 00:11:15.504 "compare": false, 00:11:15.504 "compare_and_write": false, 00:11:15.504 "abort": true, 00:11:15.504 "nvme_admin": false, 00:11:15.504 "nvme_io": false 00:11:15.504 }, 00:11:15.504 "memory_domains": [ 00:11:15.504 { 00:11:15.504 "dma_device_id": "system", 00:11:15.504 "dma_device_type": 1 00:11:15.504 }, 00:11:15.504 { 00:11:15.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.504 "dma_device_type": 2 00:11:15.504 } 00:11:15.504 ], 00:11:15.504 "driver_specific": {} 00:11:15.504 } 00:11:15.504 ]' 00:11:15.504 20:02:57 -- rpc/rpc.sh@17 -- # jq length 00:11:15.504 20:02:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:15.504 20:02:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:15.504 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.504 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.504 [2024-04-24 20:02:57.742796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:15.504 [2024-04-24 20:02:57.742861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.504 [2024-04-24 20:02:57.742878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdddc90 00:11:15.504 [2024-04-24 20:02:57.742885] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.504 [2024-04-24 20:02:57.744260] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.504 [2024-04-24 20:02:57.744298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:15.504 Passthru0 00:11:15.504 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.504 20:02:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:15.504 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.504 20:02:57 -- common/autotest_common.sh@10 -- # set +x 
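(Illustration, not part of the captured output.) The rpc_daemon_integrity pass above drives the same bdev RPCs that rpc.py exposes; a minimal manual equivalent of the stack it builds and tears down, assuming a running target on the default /var/tmp/spdk.sock and the checkout path used in this job:
    # 8 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512 --name Malloc2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    # Malloc2 should now be reported as claimed (claim_type exclusive_write) next to Passthru0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
    # tear down in reverse order
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_delete Passthru0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2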
00:11:15.762 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.762 20:02:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:15.762 { 00:11:15.762 "name": "Malloc2", 00:11:15.762 "aliases": [ 00:11:15.762 "a667f96e-9229-4d03-bf7c-aefa7eccf652" 00:11:15.762 ], 00:11:15.762 "product_name": "Malloc disk", 00:11:15.762 "block_size": 512, 00:11:15.762 "num_blocks": 16384, 00:11:15.762 "uuid": "a667f96e-9229-4d03-bf7c-aefa7eccf652", 00:11:15.762 "assigned_rate_limits": { 00:11:15.762 "rw_ios_per_sec": 0, 00:11:15.762 "rw_mbytes_per_sec": 0, 00:11:15.762 "r_mbytes_per_sec": 0, 00:11:15.762 "w_mbytes_per_sec": 0 00:11:15.762 }, 00:11:15.762 "claimed": true, 00:11:15.762 "claim_type": "exclusive_write", 00:11:15.762 "zoned": false, 00:11:15.762 "supported_io_types": { 00:11:15.762 "read": true, 00:11:15.762 "write": true, 00:11:15.762 "unmap": true, 00:11:15.762 "write_zeroes": true, 00:11:15.762 "flush": true, 00:11:15.762 "reset": true, 00:11:15.762 "compare": false, 00:11:15.762 "compare_and_write": false, 00:11:15.762 "abort": true, 00:11:15.762 "nvme_admin": false, 00:11:15.762 "nvme_io": false 00:11:15.762 }, 00:11:15.762 "memory_domains": [ 00:11:15.762 { 00:11:15.762 "dma_device_id": "system", 00:11:15.762 "dma_device_type": 1 00:11:15.762 }, 00:11:15.762 { 00:11:15.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.762 "dma_device_type": 2 00:11:15.762 } 00:11:15.762 ], 00:11:15.762 "driver_specific": {} 00:11:15.762 }, 00:11:15.762 { 00:11:15.762 "name": "Passthru0", 00:11:15.762 "aliases": [ 00:11:15.762 "90eeed60-afee-593a-8e31-3d629c40173b" 00:11:15.762 ], 00:11:15.762 "product_name": "passthru", 00:11:15.762 "block_size": 512, 00:11:15.762 "num_blocks": 16384, 00:11:15.762 "uuid": "90eeed60-afee-593a-8e31-3d629c40173b", 00:11:15.762 "assigned_rate_limits": { 00:11:15.762 "rw_ios_per_sec": 0, 00:11:15.762 "rw_mbytes_per_sec": 0, 00:11:15.762 "r_mbytes_per_sec": 0, 00:11:15.762 "w_mbytes_per_sec": 0 00:11:15.762 }, 00:11:15.762 "claimed": false, 00:11:15.762 "zoned": false, 00:11:15.762 "supported_io_types": { 00:11:15.762 "read": true, 00:11:15.762 "write": true, 00:11:15.762 "unmap": true, 00:11:15.762 "write_zeroes": true, 00:11:15.762 "flush": true, 00:11:15.762 "reset": true, 00:11:15.762 "compare": false, 00:11:15.762 "compare_and_write": false, 00:11:15.762 "abort": true, 00:11:15.762 "nvme_admin": false, 00:11:15.762 "nvme_io": false 00:11:15.762 }, 00:11:15.762 "memory_domains": [ 00:11:15.762 { 00:11:15.762 "dma_device_id": "system", 00:11:15.762 "dma_device_type": 1 00:11:15.762 }, 00:11:15.762 { 00:11:15.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.762 "dma_device_type": 2 00:11:15.762 } 00:11:15.762 ], 00:11:15.762 "driver_specific": { 00:11:15.762 "passthru": { 00:11:15.762 "name": "Passthru0", 00:11:15.762 "base_bdev_name": "Malloc2" 00:11:15.762 } 00:11:15.762 } 00:11:15.762 } 00:11:15.762 ]' 00:11:15.762 20:02:57 -- rpc/rpc.sh@21 -- # jq length 00:11:15.762 20:02:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:15.762 20:02:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:15.762 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.762 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.762 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.762 20:02:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:15.762 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.762 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.762 20:02:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.762 20:02:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:15.762 20:02:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:15.762 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.762 20:02:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:15.762 20:02:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:15.762 20:02:57 -- rpc/rpc.sh@26 -- # jq length 00:11:15.762 ************************************ 00:11:15.762 END TEST rpc_daemon_integrity 00:11:15.762 ************************************ 00:11:15.762 20:02:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:15.762 00:11:15.762 real 0m0.319s 00:11:15.762 user 0m0.201s 00:11:15.762 sys 0m0.047s 00:11:15.762 20:02:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:15.762 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:11:15.762 20:02:57 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:15.762 20:02:57 -- rpc/rpc.sh@84 -- # killprocess 58427 00:11:15.762 20:02:57 -- common/autotest_common.sh@936 -- # '[' -z 58427 ']' 00:11:15.762 20:02:57 -- common/autotest_common.sh@940 -- # kill -0 58427 00:11:15.762 20:02:57 -- common/autotest_common.sh@941 -- # uname 00:11:15.762 20:02:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.762 20:02:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58427 00:11:15.762 killing process with pid 58427 00:11:15.762 20:02:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:15.762 20:02:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:15.762 20:02:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58427' 00:11:15.762 20:02:57 -- common/autotest_common.sh@955 -- # kill 58427 00:11:15.762 20:02:57 -- common/autotest_common.sh@960 -- # wait 58427 00:11:16.331 00:11:16.331 real 0m3.106s 00:11:16.331 user 0m3.992s 00:11:16.331 sys 0m0.870s 00:11:16.331 ************************************ 00:11:16.331 END TEST rpc 00:11:16.331 ************************************ 00:11:16.331 20:02:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:16.331 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.331 20:02:58 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:16.331 20:02:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.331 20:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.331 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.331 ************************************ 00:11:16.331 START TEST skip_rpc 00:11:16.331 ************************************ 00:11:16.331 20:02:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:16.590 * Looking for test storage... 
00:11:16.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:16.590 20:02:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:16.590 20:02:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.590 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:11:16.590 ************************************ 00:11:16.590 START TEST skip_rpc 00:11:16.590 ************************************ 00:11:16.590 20:02:58 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58657 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:16.590 20:02:58 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:16.590 [2024-04-24 20:02:58.784604] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:16.590 [2024-04-24 20:02:58.784768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58657 ] 00:11:16.849 [2024-04-24 20:02:58.925422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.849 [2024-04-24 20:02:59.029910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.137 20:03:03 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:22.137 20:03:03 -- common/autotest_common.sh@638 -- # local es=0 00:11:22.137 20:03:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:22.137 20:03:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:11:22.137 20:03:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:22.137 20:03:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:11:22.137 20:03:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:22.137 20:03:03 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:11:22.137 20:03:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.137 20:03:03 -- common/autotest_common.sh@10 -- # set +x 00:11:22.137 20:03:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:22.137 20:03:03 -- common/autotest_common.sh@641 -- # es=1 00:11:22.137 20:03:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:22.137 20:03:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:22.137 20:03:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:22.137 20:03:03 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:22.137 20:03:03 -- rpc/skip_rpc.sh@23 -- # killprocess 58657 00:11:22.137 20:03:03 -- common/autotest_common.sh@936 -- # '[' -z 58657 ']' 00:11:22.137 20:03:03 -- common/autotest_common.sh@940 -- # kill -0 58657 00:11:22.137 20:03:03 -- common/autotest_common.sh@941 -- # uname 00:11:22.137 20:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:22.137 20:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58657 00:11:22.137 20:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:11:22.137 20:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:22.137 20:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58657' 00:11:22.137 killing process with pid 58657 00:11:22.137 20:03:03 -- common/autotest_common.sh@955 -- # kill 58657 00:11:22.137 20:03:03 -- common/autotest_common.sh@960 -- # wait 58657 00:11:22.137 ************************************ 00:11:22.137 END TEST skip_rpc 00:11:22.137 ************************************ 00:11:22.137 00:11:22.137 real 0m5.418s 00:11:22.137 user 0m5.093s 00:11:22.137 sys 0m0.247s 00:11:22.137 20:03:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.137 20:03:04 -- common/autotest_common.sh@10 -- # set +x 00:11:22.137 20:03:04 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:22.137 20:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.137 20:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.137 20:03:04 -- common/autotest_common.sh@10 -- # set +x 00:11:22.137 ************************************ 00:11:22.137 START TEST skip_rpc_with_json 00:11:22.137 ************************************ 00:11:22.137 20:03:04 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:11:22.137 20:03:04 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:22.137 20:03:04 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58747 00:11:22.137 20:03:04 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:22.137 20:03:04 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:22.137 20:03:04 -- rpc/skip_rpc.sh@31 -- # waitforlisten 58747 00:11:22.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.137 20:03:04 -- common/autotest_common.sh@817 -- # '[' -z 58747 ']' 00:11:22.137 20:03:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.137 20:03:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:22.137 20:03:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.137 20:03:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:22.137 20:03:04 -- common/autotest_common.sh@10 -- # set +x 00:11:22.137 [2024-04-24 20:03:04.316968] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
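(Illustration, not part of the captured output.) The skip_rpc pass above is a negative check: with --no-rpc-server the target never opens its RPC socket, so spdk_get_version is expected to fail. A minimal sketch of the same check using the paths from this job:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC answered despite --no-rpc-server" >&2
    fi
    kill "$tgt_pid"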
00:11:22.137 [2024-04-24 20:03:04.317083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58747 ] 00:11:22.395 [2024-04-24 20:03:04.462863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.395 [2024-04-24 20:03:04.569096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.333 20:03:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:23.333 20:03:05 -- common/autotest_common.sh@850 -- # return 0 00:11:23.333 20:03:05 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:23.333 20:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.333 20:03:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.333 [2024-04-24 20:03:05.268686] nvmf_rpc.c:2517:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:23.333 request: 00:11:23.333 { 00:11:23.333 "trtype": "tcp", 00:11:23.333 "method": "nvmf_get_transports", 00:11:23.333 "req_id": 1 00:11:23.333 } 00:11:23.333 Got JSON-RPC error response 00:11:23.333 response: 00:11:23.333 { 00:11:23.333 "code": -19, 00:11:23.333 "message": "No such device" 00:11:23.333 } 00:11:23.333 20:03:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:23.333 20:03:05 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:23.333 20:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.333 20:03:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.333 [2024-04-24 20:03:05.276773] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.333 20:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.333 20:03:05 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:23.333 20:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.333 20:03:05 -- common/autotest_common.sh@10 -- # set +x 00:11:23.333 20:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.333 20:03:05 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:23.333 { 00:11:23.333 "subsystems": [ 00:11:23.333 { 00:11:23.333 "subsystem": "keyring", 00:11:23.333 "config": [] 00:11:23.333 }, 00:11:23.333 { 00:11:23.333 "subsystem": "iobuf", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "iobuf_set_options", 00:11:23.334 "params": { 00:11:23.334 "small_pool_count": 8192, 00:11:23.334 "large_pool_count": 1024, 00:11:23.334 "small_bufsize": 8192, 00:11:23.334 "large_bufsize": 135168 00:11:23.334 } 00:11:23.334 } 00:11:23.334 ] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "sock", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "sock_impl_set_options", 00:11:23.334 "params": { 00:11:23.334 "impl_name": "uring", 00:11:23.334 "recv_buf_size": 2097152, 00:11:23.334 "send_buf_size": 2097152, 00:11:23.334 "enable_recv_pipe": true, 00:11:23.334 "enable_quickack": false, 00:11:23.334 "enable_placement_id": 0, 00:11:23.334 "enable_zerocopy_send_server": false, 00:11:23.334 "enable_zerocopy_send_client": false, 00:11:23.334 "zerocopy_threshold": 0, 00:11:23.334 "tls_version": 0, 00:11:23.334 "enable_ktls": false 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "sock_impl_set_options", 00:11:23.334 "params": { 00:11:23.334 "impl_name": "posix", 00:11:23.334 "recv_buf_size": 2097152, 00:11:23.334 "send_buf_size": 2097152, 00:11:23.334 "enable_recv_pipe": true, 
00:11:23.334 "enable_quickack": false, 00:11:23.334 "enable_placement_id": 0, 00:11:23.334 "enable_zerocopy_send_server": true, 00:11:23.334 "enable_zerocopy_send_client": false, 00:11:23.334 "zerocopy_threshold": 0, 00:11:23.334 "tls_version": 0, 00:11:23.334 "enable_ktls": false 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "sock_impl_set_options", 00:11:23.334 "params": { 00:11:23.334 "impl_name": "ssl", 00:11:23.334 "recv_buf_size": 4096, 00:11:23.334 "send_buf_size": 4096, 00:11:23.334 "enable_recv_pipe": true, 00:11:23.334 "enable_quickack": false, 00:11:23.334 "enable_placement_id": 0, 00:11:23.334 "enable_zerocopy_send_server": true, 00:11:23.334 "enable_zerocopy_send_client": false, 00:11:23.334 "zerocopy_threshold": 0, 00:11:23.334 "tls_version": 0, 00:11:23.334 "enable_ktls": false 00:11:23.334 } 00:11:23.334 } 00:11:23.334 ] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "vmd", 00:11:23.334 "config": [] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "accel", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "accel_set_options", 00:11:23.334 "params": { 00:11:23.334 "small_cache_size": 128, 00:11:23.334 "large_cache_size": 16, 00:11:23.334 "task_count": 2048, 00:11:23.334 "sequence_count": 2048, 00:11:23.334 "buf_count": 2048 00:11:23.334 } 00:11:23.334 } 00:11:23.334 ] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "bdev", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "bdev_set_options", 00:11:23.334 "params": { 00:11:23.334 "bdev_io_pool_size": 65535, 00:11:23.334 "bdev_io_cache_size": 256, 00:11:23.334 "bdev_auto_examine": true, 00:11:23.334 "iobuf_small_cache_size": 128, 00:11:23.334 "iobuf_large_cache_size": 16 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "bdev_raid_set_options", 00:11:23.334 "params": { 00:11:23.334 "process_window_size_kb": 1024 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "bdev_iscsi_set_options", 00:11:23.334 "params": { 00:11:23.334 "timeout_sec": 30 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "bdev_nvme_set_options", 00:11:23.334 "params": { 00:11:23.334 "action_on_timeout": "none", 00:11:23.334 "timeout_us": 0, 00:11:23.334 "timeout_admin_us": 0, 00:11:23.334 "keep_alive_timeout_ms": 10000, 00:11:23.334 "arbitration_burst": 0, 00:11:23.334 "low_priority_weight": 0, 00:11:23.334 "medium_priority_weight": 0, 00:11:23.334 "high_priority_weight": 0, 00:11:23.334 "nvme_adminq_poll_period_us": 10000, 00:11:23.334 "nvme_ioq_poll_period_us": 0, 00:11:23.334 "io_queue_requests": 0, 00:11:23.334 "delay_cmd_submit": true, 00:11:23.334 "transport_retry_count": 4, 00:11:23.334 "bdev_retry_count": 3, 00:11:23.334 "transport_ack_timeout": 0, 00:11:23.334 "ctrlr_loss_timeout_sec": 0, 00:11:23.334 "reconnect_delay_sec": 0, 00:11:23.334 "fast_io_fail_timeout_sec": 0, 00:11:23.334 "disable_auto_failback": false, 00:11:23.334 "generate_uuids": false, 00:11:23.334 "transport_tos": 0, 00:11:23.334 "nvme_error_stat": false, 00:11:23.334 "rdma_srq_size": 0, 00:11:23.334 "io_path_stat": false, 00:11:23.334 "allow_accel_sequence": false, 00:11:23.334 "rdma_max_cq_size": 0, 00:11:23.334 "rdma_cm_event_timeout_ms": 0, 00:11:23.334 "dhchap_digests": [ 00:11:23.334 "sha256", 00:11:23.334 "sha384", 00:11:23.334 "sha512" 00:11:23.334 ], 00:11:23.334 "dhchap_dhgroups": [ 00:11:23.334 "null", 00:11:23.334 "ffdhe2048", 00:11:23.334 "ffdhe3072", 00:11:23.334 "ffdhe4096", 00:11:23.334 "ffdhe6144", 00:11:23.334 "ffdhe8192" 
00:11:23.334 ] 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "bdev_nvme_set_hotplug", 00:11:23.334 "params": { 00:11:23.334 "period_us": 100000, 00:11:23.334 "enable": false 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "bdev_wait_for_examine" 00:11:23.334 } 00:11:23.334 ] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "scsi", 00:11:23.334 "config": null 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "scheduler", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "framework_set_scheduler", 00:11:23.334 "params": { 00:11:23.334 "name": "static" 00:11:23.334 } 00:11:23.334 } 00:11:23.334 ] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "vhost_scsi", 00:11:23.334 "config": [] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "vhost_blk", 00:11:23.334 "config": [] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "ublk", 00:11:23.334 "config": [] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "nbd", 00:11:23.334 "config": [] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "nvmf", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "nvmf_set_config", 00:11:23.334 "params": { 00:11:23.334 "discovery_filter": "match_any", 00:11:23.334 "admin_cmd_passthru": { 00:11:23.334 "identify_ctrlr": false 00:11:23.334 } 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "nvmf_set_max_subsystems", 00:11:23.334 "params": { 00:11:23.334 "max_subsystems": 1024 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "nvmf_set_crdt", 00:11:23.334 "params": { 00:11:23.334 "crdt1": 0, 00:11:23.334 "crdt2": 0, 00:11:23.334 "crdt3": 0 00:11:23.334 } 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "method": "nvmf_create_transport", 00:11:23.334 "params": { 00:11:23.334 "trtype": "TCP", 00:11:23.334 "max_queue_depth": 128, 00:11:23.334 "max_io_qpairs_per_ctrlr": 127, 00:11:23.334 "in_capsule_data_size": 4096, 00:11:23.334 "max_io_size": 131072, 00:11:23.334 "io_unit_size": 131072, 00:11:23.334 "max_aq_depth": 128, 00:11:23.334 "num_shared_buffers": 511, 00:11:23.334 "buf_cache_size": 4294967295, 00:11:23.334 "dif_insert_or_strip": false, 00:11:23.334 "zcopy": false, 00:11:23.334 "c2h_success": true, 00:11:23.334 "sock_priority": 0, 00:11:23.334 "abort_timeout_sec": 1, 00:11:23.334 "ack_timeout": 0 00:11:23.334 } 00:11:23.334 } 00:11:23.334 ] 00:11:23.334 }, 00:11:23.334 { 00:11:23.334 "subsystem": "iscsi", 00:11:23.334 "config": [ 00:11:23.334 { 00:11:23.334 "method": "iscsi_set_options", 00:11:23.334 "params": { 00:11:23.334 "node_base": "iqn.2016-06.io.spdk", 00:11:23.334 "max_sessions": 128, 00:11:23.334 "max_connections_per_session": 2, 00:11:23.334 "max_queue_depth": 64, 00:11:23.334 "default_time2wait": 2, 00:11:23.334 "default_time2retain": 20, 00:11:23.335 "first_burst_length": 8192, 00:11:23.335 "immediate_data": true, 00:11:23.335 "allow_duplicated_isid": false, 00:11:23.335 "error_recovery_level": 0, 00:11:23.335 "nop_timeout": 60, 00:11:23.335 "nop_in_interval": 30, 00:11:23.335 "disable_chap": false, 00:11:23.335 "require_chap": false, 00:11:23.335 "mutual_chap": false, 00:11:23.335 "chap_group": 0, 00:11:23.335 "max_large_datain_per_connection": 64, 00:11:23.335 "max_r2t_per_connection": 4, 00:11:23.335 "pdu_pool_size": 36864, 00:11:23.335 "immediate_data_pool_size": 16384, 00:11:23.335 "data_out_pool_size": 2048 00:11:23.335 } 00:11:23.335 } 00:11:23.335 ] 00:11:23.335 } 00:11:23.335 ] 00:11:23.335 } 00:11:23.335 20:03:05 -- rpc/skip_rpc.sh@39 -- # 
trap - SIGINT SIGTERM EXIT 00:11:23.335 20:03:05 -- rpc/skip_rpc.sh@40 -- # killprocess 58747 00:11:23.335 20:03:05 -- common/autotest_common.sh@936 -- # '[' -z 58747 ']' 00:11:23.335 20:03:05 -- common/autotest_common.sh@940 -- # kill -0 58747 00:11:23.335 20:03:05 -- common/autotest_common.sh@941 -- # uname 00:11:23.335 20:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.335 20:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58747 00:11:23.335 killing process with pid 58747 00:11:23.335 20:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:23.335 20:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:23.335 20:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58747' 00:11:23.335 20:03:05 -- common/autotest_common.sh@955 -- # kill 58747 00:11:23.335 20:03:05 -- common/autotest_common.sh@960 -- # wait 58747 00:11:23.594 20:03:05 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58775 00:11:23.594 20:03:05 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:23.594 20:03:05 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:28.868 20:03:10 -- rpc/skip_rpc.sh@50 -- # killprocess 58775 00:11:28.868 20:03:10 -- common/autotest_common.sh@936 -- # '[' -z 58775 ']' 00:11:28.868 20:03:10 -- common/autotest_common.sh@940 -- # kill -0 58775 00:11:28.868 20:03:10 -- common/autotest_common.sh@941 -- # uname 00:11:28.868 20:03:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.868 20:03:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58775 00:11:28.868 killing process with pid 58775 00:11:28.868 20:03:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:28.868 20:03:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:28.868 20:03:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58775' 00:11:28.868 20:03:10 -- common/autotest_common.sh@955 -- # kill 58775 00:11:28.868 20:03:10 -- common/autotest_common.sh@960 -- # wait 58775 00:11:29.127 20:03:11 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:29.127 20:03:11 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:29.127 ************************************ 00:11:29.127 END TEST skip_rpc_with_json 00:11:29.127 ************************************ 00:11:29.127 00:11:29.127 real 0m6.951s 00:11:29.127 user 0m6.749s 00:11:29.127 sys 0m0.534s 00:11:29.127 20:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.127 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.127 20:03:11 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:29.127 20:03:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.127 20:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.127 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.127 ************************************ 00:11:29.127 START TEST skip_rpc_with_delay 00:11:29.127 ************************************ 00:11:29.127 20:03:11 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:11:29.127 20:03:11 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:29.127 20:03:11 -- common/autotest_common.sh@638 -- # local es=0 00:11:29.127 20:03:11 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:29.127 20:03:11 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:29.127 20:03:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.127 20:03:11 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:29.127 20:03:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.127 20:03:11 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:29.127 20:03:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.127 20:03:11 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:29.127 20:03:11 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:29.127 20:03:11 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:29.127 [2024-04-24 20:03:11.373735] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:11:29.127 [2024-04-24 20:03:11.373860] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:11:29.385 20:03:11 -- common/autotest_common.sh@641 -- # es=1 00:11:29.385 20:03:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:29.385 20:03:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:29.385 ************************************ 00:11:29.385 END TEST skip_rpc_with_delay 00:11:29.385 ************************************ 00:11:29.385 20:03:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:29.385 00:11:29.385 real 0m0.077s 00:11:29.385 user 0m0.043s 00:11:29.385 sys 0m0.032s 00:11:29.385 20:03:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.385 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.385 20:03:11 -- rpc/skip_rpc.sh@77 -- # uname 00:11:29.385 20:03:11 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:29.385 20:03:11 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:29.385 20:03:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.385 20:03:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.385 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.385 ************************************ 00:11:29.385 START TEST exit_on_failed_rpc_init 00:11:29.385 ************************************ 00:11:29.385 20:03:11 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:11:29.385 20:03:11 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58894 00:11:29.385 20:03:11 -- rpc/skip_rpc.sh@63 -- # waitforlisten 58894 00:11:29.385 20:03:11 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:29.385 20:03:11 -- common/autotest_common.sh@817 -- # '[' -z 58894 ']' 00:11:29.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.385 20:03:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.385 20:03:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:29.385 20:03:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:29.385 20:03:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:29.385 20:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.385 [2024-04-24 20:03:11.572528] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:29.385 [2024-04-24 20:03:11.572601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:11:29.643 [2024-04-24 20:03:11.711220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.643 [2024-04-24 20:03:11.829083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.577 20:03:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.577 20:03:12 -- common/autotest_common.sh@850 -- # return 0 00:11:30.577 20:03:12 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:30.577 20:03:12 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:30.577 20:03:12 -- common/autotest_common.sh@638 -- # local es=0 00:11:30.577 20:03:12 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:30.577 20:03:12 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:30.577 20:03:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.577 20:03:12 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:30.577 20:03:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.577 20:03:12 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:30.577 20:03:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:30.577 20:03:12 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:30.577 20:03:12 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:30.577 20:03:12 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:30.577 [2024-04-24 20:03:12.520017] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:30.577 [2024-04-24 20:03:12.520092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58911 ] 00:11:30.577 [2024-04-24 20:03:12.660250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.577 [2024-04-24 20:03:12.765184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.577 [2024-04-24 20:03:12.765574] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
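(Illustration, not part of the captured output.) The error above is the point of exit_on_failed_rpc_init: the second target asks for the same default RPC socket that the first one already holds, so RPC initialization fails and the app shuts down. Two targets can coexist only when the second is given its own socket with -r (the alternate path below is just an example) and enough hugepages remain for both:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                     # holds /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version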
00:11:30.577 [2024-04-24 20:03:12.765773] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:30.577 [2024-04-24 20:03:12.765877] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.836 20:03:12 -- common/autotest_common.sh@641 -- # es=234 00:11:30.836 20:03:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:30.836 20:03:12 -- common/autotest_common.sh@650 -- # es=106 00:11:30.836 20:03:12 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:30.836 20:03:12 -- common/autotest_common.sh@658 -- # es=1 00:11:30.836 20:03:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:30.836 20:03:12 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:30.836 20:03:12 -- rpc/skip_rpc.sh@70 -- # killprocess 58894 00:11:30.836 20:03:12 -- common/autotest_common.sh@936 -- # '[' -z 58894 ']' 00:11:30.836 20:03:12 -- common/autotest_common.sh@940 -- # kill -0 58894 00:11:30.836 20:03:12 -- common/autotest_common.sh@941 -- # uname 00:11:30.836 20:03:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:30.836 20:03:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58894 00:11:30.836 killing process with pid 58894 00:11:30.836 20:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:30.836 20:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:30.836 20:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58894' 00:11:30.836 20:03:12 -- common/autotest_common.sh@955 -- # kill 58894 00:11:30.836 20:03:12 -- common/autotest_common.sh@960 -- # wait 58894 00:11:31.096 00:11:31.096 real 0m1.750s 00:11:31.096 user 0m2.038s 00:11:31.096 sys 0m0.363s 00:11:31.096 20:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.096 ************************************ 00:11:31.096 END TEST exit_on_failed_rpc_init 00:11:31.096 ************************************ 00:11:31.096 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.096 20:03:13 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:31.096 00:11:31.096 real 0m14.806s 00:11:31.096 user 0m14.146s 00:11:31.096 sys 0m1.534s 00:11:31.096 ************************************ 00:11:31.096 END TEST skip_rpc 00:11:31.096 ************************************ 00:11:31.096 20:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.096 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.354 20:03:13 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:31.354 20:03:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.354 20:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.355 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.355 ************************************ 00:11:31.355 START TEST rpc_client 00:11:31.355 ************************************ 00:11:31.355 20:03:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:31.355 * Looking for test storage... 
00:11:31.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:31.355 20:03:13 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:31.355 OK 00:11:31.355 20:03:13 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:31.355 00:11:31.355 real 0m0.094s 00:11:31.355 user 0m0.045s 00:11:31.355 sys 0m0.055s 00:11:31.355 20:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.355 ************************************ 00:11:31.355 END TEST rpc_client 00:11:31.355 ************************************ 00:11:31.355 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.355 20:03:13 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:31.355 20:03:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:31.355 20:03:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.355 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.660 ************************************ 00:11:31.660 START TEST json_config 00:11:31.660 ************************************ 00:11:31.660 20:03:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:31.660 20:03:13 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:31.660 20:03:13 -- nvmf/common.sh@7 -- # uname -s 00:11:31.660 20:03:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.660 20:03:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.660 20:03:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.660 20:03:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.660 20:03:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.660 20:03:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.660 20:03:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.660 20:03:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.660 20:03:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.660 20:03:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.660 20:03:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:11:31.660 20:03:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:11:31.660 20:03:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.660 20:03:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.660 20:03:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:31.660 20:03:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.660 20:03:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:31.660 20:03:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.660 20:03:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.660 20:03:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.660 20:03:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.660 20:03:13 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.660 20:03:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.660 20:03:13 -- paths/export.sh@5 -- # export PATH 00:11:31.660 20:03:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.660 20:03:13 -- nvmf/common.sh@47 -- # : 0 00:11:31.660 20:03:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.660 20:03:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.660 20:03:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.660 20:03:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.660 20:03:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.660 20:03:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.660 20:03:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.660 20:03:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.660 20:03:13 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:31.660 20:03:13 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:31.660 20:03:13 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:31.660 20:03:13 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:31.660 20:03:13 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:31.660 20:03:13 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:31.660 20:03:13 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:11:31.660 20:03:13 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:31.660 20:03:13 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:31.660 20:03:13 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:31.660 20:03:13 -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:31.660 INFO: JSON configuration test init 00:11:31.660 20:03:13 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:31.660 20:03:13 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:31.660 20:03:13 -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:11:31.660 20:03:13 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:31.660 20:03:13 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:11:31.660 20:03:13 -- json_config/json_config.sh@357 -- # json_config_test_init 00:11:31.660 20:03:13 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:11:31.660 20:03:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:31.660 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.660 20:03:13 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:11:31.660 20:03:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:31.660 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.660 20:03:13 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:11:31.660 20:03:13 -- json_config/common.sh@9 -- # local app=target 00:11:31.660 20:03:13 -- json_config/common.sh@10 -- # shift 00:11:31.660 20:03:13 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:31.660 20:03:13 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:31.660 20:03:13 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:31.660 Waiting for target to run... 00:11:31.660 20:03:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:31.660 20:03:13 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:31.660 20:03:13 -- json_config/common.sh@22 -- # app_pid["$app"]=59040 00:11:31.660 20:03:13 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:31.660 20:03:13 -- json_config/common.sh@25 -- # waitforlisten 59040 /var/tmp/spdk_tgt.sock 00:11:31.660 20:03:13 -- common/autotest_common.sh@817 -- # '[' -z 59040 ']' 00:11:31.660 20:03:13 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:31.660 20:03:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:31.660 20:03:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:31.660 20:03:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:31.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:31.660 20:03:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:31.660 20:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:31.660 [2024-04-24 20:03:13.809095] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:11:31.660 [2024-04-24 20:03:13.809266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ] 00:11:31.918 [2024-04-24 20:03:14.164311] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.177 [2024-04-24 20:03:14.248412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.745 20:03:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:32.745 20:03:14 -- common/autotest_common.sh@850 -- # return 0 00:11:32.745 20:03:14 -- json_config/common.sh@26 -- # echo '' 00:11:32.745 00:11:32.745 20:03:14 -- json_config/json_config.sh@269 -- # create_accel_config 00:11:32.745 20:03:14 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:11:32.745 20:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:32.745 20:03:14 -- common/autotest_common.sh@10 -- # set +x 00:11:32.745 20:03:14 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:11:32.745 20:03:14 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:11:32.745 20:03:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:32.745 20:03:14 -- common/autotest_common.sh@10 -- # set +x 00:11:32.745 20:03:14 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:32.745 20:03:14 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:11:32.745 20:03:14 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:33.003 20:03:15 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:11:33.003 20:03:15 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:33.003 20:03:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:33.003 20:03:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.003 20:03:15 -- json_config/json_config.sh@45 -- # local ret=0 00:11:33.003 20:03:15 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:33.003 20:03:15 -- json_config/json_config.sh@46 -- # local enabled_types 00:11:33.003 20:03:15 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:11:33.004 20:03:15 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:11:33.004 20:03:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:33.263 20:03:15 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:33.263 20:03:15 -- json_config/json_config.sh@48 -- # local get_types 00:11:33.263 20:03:15 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:33.263 20:03:15 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:11:33.263 20:03:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:33.263 20:03:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.263 20:03:15 -- json_config/json_config.sh@55 -- # return 0 00:11:33.263 20:03:15 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:11:33.263 20:03:15 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:11:33.263 20:03:15 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:11:33.263 20:03:15 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
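(Illustration, not part of the captured output.) The notification-type check above is a single RPC whose JSON output is compared against the expected bdev events; against the socket used in this run it amounts to:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
    # expected here: ["bdev_register", "bdev_unregister"]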
00:11:33.263 20:03:15 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:11:33.263 20:03:15 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:11:33.263 20:03:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:33.263 20:03:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.263 20:03:15 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:33.263 20:03:15 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:11:33.263 20:03:15 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:11:33.263 20:03:15 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:33.263 20:03:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:33.523 MallocForNvmf0 00:11:33.523 20:03:15 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:33.523 20:03:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:33.782 MallocForNvmf1 00:11:33.782 20:03:15 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:33.782 20:03:15 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:34.041 [2024-04-24 20:03:16.248025] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.041 20:03:16 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:34.041 20:03:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:34.300 20:03:16 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:34.300 20:03:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:34.558 20:03:16 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:34.558 20:03:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:34.817 20:03:16 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:34.817 20:03:16 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:35.103 [2024-04-24 20:03:17.142615] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:35.103 [2024-04-24 20:03:17.142882] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:35.103 20:03:17 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:11:35.103 20:03:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.103 20:03:17 -- common/autotest_common.sh@10 -- # 
set +x 00:11:35.103 20:03:17 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:11:35.103 20:03:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.103 20:03:17 -- common/autotest_common.sh@10 -- # set +x 00:11:35.103 20:03:17 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:11:35.103 20:03:17 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:35.103 20:03:17 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:35.366 MallocBdevForConfigChangeCheck 00:11:35.366 20:03:17 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:11:35.366 20:03:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:35.366 20:03:17 -- common/autotest_common.sh@10 -- # set +x 00:11:35.366 20:03:17 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:11:35.366 20:03:17 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:35.625 INFO: shutting down applications... 00:11:35.625 20:03:17 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:11:35.625 20:03:17 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:11:35.625 20:03:17 -- json_config/json_config.sh@368 -- # json_config_clear target 00:11:35.625 20:03:17 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:11:35.625 20:03:17 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:36.193 Calling clear_iscsi_subsystem 00:11:36.193 Calling clear_nvmf_subsystem 00:11:36.193 Calling clear_nbd_subsystem 00:11:36.193 Calling clear_ublk_subsystem 00:11:36.193 Calling clear_vhost_blk_subsystem 00:11:36.193 Calling clear_vhost_scsi_subsystem 00:11:36.193 Calling clear_bdev_subsystem 00:11:36.193 20:03:18 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:36.193 20:03:18 -- json_config/json_config.sh@343 -- # count=100 00:11:36.193 20:03:18 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:11:36.193 20:03:18 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:36.193 20:03:18 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:36.193 20:03:18 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:36.453 20:03:18 -- json_config/json_config.sh@345 -- # break 00:11:36.453 20:03:18 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:11:36.453 20:03:18 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:11:36.453 20:03:18 -- json_config/common.sh@31 -- # local app=target 00:11:36.453 20:03:18 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:36.453 20:03:18 -- json_config/common.sh@35 -- # [[ -n 59040 ]] 00:11:36.453 20:03:18 -- json_config/common.sh@38 -- # kill -SIGINT 59040 00:11:36.453 [2024-04-24 20:03:18.599884] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:36.453 20:03:18 -- json_config/common.sh@40 -- # (( i = 0 
)) 00:11:36.453 20:03:18 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:36.453 20:03:18 -- json_config/common.sh@41 -- # kill -0 59040 00:11:36.454 20:03:18 -- json_config/common.sh@45 -- # sleep 0.5 00:11:37.022 20:03:19 -- json_config/common.sh@40 -- # (( i++ )) 00:11:37.022 20:03:19 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:37.022 20:03:19 -- json_config/common.sh@41 -- # kill -0 59040 00:11:37.022 20:03:19 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:37.022 20:03:19 -- json_config/common.sh@43 -- # break 00:11:37.022 20:03:19 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:37.022 20:03:19 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:37.022 SPDK target shutdown done 00:11:37.022 20:03:19 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:11:37.022 INFO: relaunching applications... 00:11:37.022 20:03:19 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:37.022 20:03:19 -- json_config/common.sh@9 -- # local app=target 00:11:37.022 20:03:19 -- json_config/common.sh@10 -- # shift 00:11:37.022 20:03:19 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:37.022 20:03:19 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:37.022 20:03:19 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:37.022 20:03:19 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:37.022 20:03:19 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:37.022 20:03:19 -- json_config/common.sh@22 -- # app_pid["$app"]=59231 00:11:37.022 20:03:19 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:37.022 Waiting for target to run... 00:11:37.022 20:03:19 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:37.022 20:03:19 -- json_config/common.sh@25 -- # waitforlisten 59231 /var/tmp/spdk_tgt.sock 00:11:37.022 20:03:19 -- common/autotest_common.sh@817 -- # '[' -z 59231 ']' 00:11:37.022 20:03:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:37.022 20:03:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:37.022 20:03:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:37.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:37.022 20:03:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:37.022 20:03:19 -- common/autotest_common.sh@10 -- # set +x 00:11:37.022 [2024-04-24 20:03:19.168196] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
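The spdk_tgt_config.json that the relaunched target (pid 59231) loads with --json is simply the saved result of the RPC sequence issued earlier in create_nvmf_subsystem_config: two malloc bdevs, a TCP transport, one subsystem with both namespaces, and a listener on 127.0.0.1:4420. A sketch of that same sequence issued by hand against the target socket, with the exact values used by the test (sizes, NQN, serial and listener address are the test's choices, not defaults); the add_listener call is also what triggers the [listen_]address.transport deprecation warning recorded above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420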
00:11:37.022 [2024-04-24 20:03:19.168382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59231 ] 00:11:37.282 [2024-04-24 20:03:19.530373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.541 [2024-04-24 20:03:19.614049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.801 [2024-04-24 20:03:19.925625] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.801 [2024-04-24 20:03:19.957476] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:37.801 [2024-04-24 20:03:19.957683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:38.059 00:11:38.059 INFO: Checking if target configuration is the same... 00:11:38.059 20:03:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:38.059 20:03:20 -- common/autotest_common.sh@850 -- # return 0 00:11:38.059 20:03:20 -- json_config/common.sh@26 -- # echo '' 00:11:38.059 20:03:20 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:11:38.059 20:03:20 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:38.059 20:03:20 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:38.059 20:03:20 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:11:38.059 20:03:20 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:38.059 + '[' 2 -ne 2 ']' 00:11:38.059 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:38.059 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:38.059 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:38.059 +++ basename /dev/fd/62 00:11:38.059 ++ mktemp /tmp/62.XXX 00:11:38.059 + tmp_file_1=/tmp/62.rO0 00:11:38.059 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:38.059 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:38.059 + tmp_file_2=/tmp/spdk_tgt_config.json.486 00:11:38.059 + ret=0 00:11:38.059 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:38.318 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:38.318 + diff -u /tmp/62.rO0 /tmp/spdk_tgt_config.json.486 00:11:38.318 + echo 'INFO: JSON config files are the same' 00:11:38.318 INFO: JSON config files are the same 00:11:38.318 + rm /tmp/62.rO0 /tmp/spdk_tgt_config.json.486 00:11:38.318 + exit 0 00:11:38.318 20:03:20 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:11:38.318 20:03:20 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:38.318 INFO: changing configuration and checking if this can be detected... 
00:11:38.318 20:03:20 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:38.318 20:03:20 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:38.577 20:03:20 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:38.577 20:03:20 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:11:38.577 20:03:20 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:38.577 + '[' 2 -ne 2 ']' 00:11:38.577 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:38.577 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:38.577 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:38.577 +++ basename /dev/fd/62 00:11:38.577 ++ mktemp /tmp/62.XXX 00:11:38.577 + tmp_file_1=/tmp/62.pAh 00:11:38.577 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:38.577 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:38.577 + tmp_file_2=/tmp/spdk_tgt_config.json.Wwh 00:11:38.577 + ret=0 00:11:38.577 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:39.204 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:39.204 + diff -u /tmp/62.pAh /tmp/spdk_tgt_config.json.Wwh 00:11:39.204 + ret=1 00:11:39.204 + echo '=== Start of file: /tmp/62.pAh ===' 00:11:39.204 + cat /tmp/62.pAh 00:11:39.204 + echo '=== End of file: /tmp/62.pAh ===' 00:11:39.204 + echo '' 00:11:39.204 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Wwh ===' 00:11:39.204 + cat /tmp/spdk_tgt_config.json.Wwh 00:11:39.204 + echo '=== End of file: /tmp/spdk_tgt_config.json.Wwh ===' 00:11:39.204 + echo '' 00:11:39.204 + rm /tmp/62.pAh /tmp/spdk_tgt_config.json.Wwh 00:11:39.204 + exit 1 00:11:39.204 INFO: configuration change detected. 00:11:39.204 20:03:21 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
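Both passes of json_diff.sh above use the same mechanism: the live configuration is exported with the save_config RPC, both sides are normalized with config_filter.py -method sort, and diff -u decides the outcome (an empty diff exits 0 and reports "JSON config files are the same"; a non-empty one sets ret=1 and reports a detected change, as happens here after MallocBdevForConfigChangeCheck is deleted). A sketch of that comparison done by hand, with paths from the log and stdin redirection mirroring how json_diff.sh feeds the filter:

  spdk=/home/vagrant/spdk_repo/spdk
  # Export and normalize the running target's configuration
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
  # Normalize the stored reference the same way
  $spdk/test/json_config/config_filter.py -method sort < $spdk/spdk_tgt_config.json > /tmp/ref.json
  # Identical files mean no configuration change
  diff -u /tmp/ref.json /tmp/live.json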
00:11:39.204 20:03:21 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:11:39.204 20:03:21 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:11:39.204 20:03:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:39.204 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.204 20:03:21 -- json_config/json_config.sh@307 -- # local ret=0 00:11:39.204 20:03:21 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:11:39.204 20:03:21 -- json_config/json_config.sh@317 -- # [[ -n 59231 ]] 00:11:39.204 20:03:21 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:11:39.204 20:03:21 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:11:39.204 20:03:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:39.204 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.204 20:03:21 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:11:39.204 20:03:21 -- json_config/json_config.sh@193 -- # uname -s 00:11:39.204 20:03:21 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:11:39.204 20:03:21 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:11:39.204 20:03:21 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:11:39.204 20:03:21 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:11:39.204 20:03:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:39.204 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.204 20:03:21 -- json_config/json_config.sh@323 -- # killprocess 59231 00:11:39.204 20:03:21 -- common/autotest_common.sh@936 -- # '[' -z 59231 ']' 00:11:39.204 20:03:21 -- common/autotest_common.sh@940 -- # kill -0 59231 00:11:39.204 20:03:21 -- common/autotest_common.sh@941 -- # uname 00:11:39.204 20:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.204 20:03:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59231 00:11:39.204 20:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:39.204 20:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:39.204 20:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59231' 00:11:39.204 killing process with pid 59231 00:11:39.204 20:03:21 -- common/autotest_common.sh@955 -- # kill 59231 00:11:39.204 [2024-04-24 20:03:21.369833] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:39.204 20:03:21 -- common/autotest_common.sh@960 -- # wait 59231 00:11:39.463 20:03:21 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:39.463 20:03:21 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:11:39.463 20:03:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:39.463 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 INFO: Success 00:11:39.463 20:03:21 -- json_config/json_config.sh@328 -- # return 0 00:11:39.463 20:03:21 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:11:39.463 ************************************ 00:11:39.463 END TEST json_config 00:11:39.463 ************************************ 00:11:39.463 00:11:39.463 real 0m8.019s 00:11:39.463 user 0m11.539s 00:11:39.463 sys 0m1.541s 00:11:39.463 20:03:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 
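Each target in this log is torn down the same way json_config just did for pid 59231: SIGINT is sent to the recorded pid, and the script then polls with kill -0, half a second at a time for up to 30 iterations, until the process is gone. A minimal sketch of that pattern; the loop mirrors the (( i < 30 )) / sleep 0.5 logic in json_config/common.sh:

  pid=59231                 # pid recorded when the target was started
  kill -SIGINT "$pid"       # ask the target to shut down cleanly
  for _ in $(seq 1 30); do
      kill -0 "$pid" 2>/dev/null || break   # stop polling once the process has exited
      sleep 0.5
  done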
00:11:39.463 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 20:03:21 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:39.463 20:03:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:39.463 20:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:39.463 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.721 ************************************ 00:11:39.721 START TEST json_config_extra_key 00:11:39.721 ************************************ 00:11:39.721 20:03:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:39.721 20:03:21 -- nvmf/common.sh@7 -- # uname -s 00:11:39.721 20:03:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.721 20:03:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.721 20:03:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.721 20:03:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.721 20:03:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.721 20:03:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.721 20:03:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.721 20:03:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.721 20:03:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.721 20:03:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.721 20:03:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:11:39.721 20:03:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:11:39.721 20:03:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.721 20:03:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.721 20:03:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:39.721 20:03:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.721 20:03:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.721 20:03:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.721 20:03:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.721 20:03:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.721 20:03:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.721 20:03:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.721 20:03:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.721 20:03:21 -- paths/export.sh@5 -- # export PATH 00:11:39.721 20:03:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.721 20:03:21 -- nvmf/common.sh@47 -- # : 0 00:11:39.721 20:03:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.721 20:03:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.721 20:03:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.721 20:03:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.721 20:03:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.721 20:03:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.721 20:03:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.721 20:03:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:39.721 INFO: launching applications... 00:11:39.721 20:03:21 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:39.721 20:03:21 -- json_config/common.sh@9 -- # local app=target 00:11:39.721 20:03:21 -- json_config/common.sh@10 -- # shift 00:11:39.721 20:03:21 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:39.721 20:03:21 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:39.721 20:03:21 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:39.721 20:03:21 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:39.721 20:03:21 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:39.721 Waiting for target to run... 
00:11:39.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:39.721 20:03:21 -- json_config/common.sh@22 -- # app_pid["$app"]=59375 00:11:39.721 20:03:21 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:39.721 20:03:21 -- json_config/common.sh@25 -- # waitforlisten 59375 /var/tmp/spdk_tgt.sock 00:11:39.721 20:03:21 -- common/autotest_common.sh@817 -- # '[' -z 59375 ']' 00:11:39.721 20:03:21 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:39.721 20:03:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:39.721 20:03:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:39.721 20:03:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:39.721 20:03:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:39.721 20:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:39.721 [2024-04-24 20:03:21.970868] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:39.721 [2024-04-24 20:03:21.970956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59375 ] 00:11:40.288 [2024-04-24 20:03:22.327612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.288 [2024-04-24 20:03:22.410890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.854 00:11:40.854 INFO: shutting down applications... 00:11:40.854 20:03:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:40.854 20:03:22 -- common/autotest_common.sh@850 -- # return 0 00:11:40.854 20:03:22 -- json_config/common.sh@26 -- # echo '' 00:11:40.855 20:03:22 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
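In contrast to the json_config test, json_config_extra_key does not build its configuration over RPC at all: spdk_tgt is launched directly with a pre-built JSON file via --json, and the test only needs the RPC socket to come up before shutting the target down again. A condensed sketch of that launch with the flags from the log; the polling loop is a simplified stand-in for the waitforlisten helper, and spdk_get_version is just a cheap RPC (listed by rpc_get_methods later in this log) used here to probe the socket:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  # Wait until the UNIX-domain RPC socket answers before driving the target
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done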
00:11:40.855 20:03:22 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:40.855 20:03:22 -- json_config/common.sh@31 -- # local app=target 00:11:40.855 20:03:22 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:40.855 20:03:22 -- json_config/common.sh@35 -- # [[ -n 59375 ]] 00:11:40.855 20:03:22 -- json_config/common.sh@38 -- # kill -SIGINT 59375 00:11:40.855 20:03:22 -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:40.855 20:03:22 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:40.855 20:03:22 -- json_config/common.sh@41 -- # kill -0 59375 00:11:40.855 20:03:22 -- json_config/common.sh@45 -- # sleep 0.5 00:11:41.422 20:03:23 -- json_config/common.sh@40 -- # (( i++ )) 00:11:41.422 20:03:23 -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:41.422 20:03:23 -- json_config/common.sh@41 -- # kill -0 59375 00:11:41.422 20:03:23 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:41.422 20:03:23 -- json_config/common.sh@43 -- # break 00:11:41.422 20:03:23 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:41.422 20:03:23 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:41.422 SPDK target shutdown done 00:11:41.422 20:03:23 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:41.422 Success 00:11:41.422 00:11:41.422 real 0m1.603s 00:11:41.422 user 0m1.446s 00:11:41.422 sys 0m0.380s 00:11:41.422 20:03:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:41.422 20:03:23 -- common/autotest_common.sh@10 -- # set +x 00:11:41.422 ************************************ 00:11:41.422 END TEST json_config_extra_key 00:11:41.422 ************************************ 00:11:41.422 20:03:23 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:41.422 20:03:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:41.422 20:03:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.422 20:03:23 -- common/autotest_common.sh@10 -- # set +x 00:11:41.422 ************************************ 00:11:41.422 START TEST alias_rpc 00:11:41.422 ************************************ 00:11:41.422 20:03:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:41.422 * Looking for test storage... 00:11:41.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:41.422 20:03:23 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:41.422 20:03:23 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59446 00:11:41.422 20:03:23 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:41.422 20:03:23 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59446 00:11:41.422 20:03:23 -- common/autotest_common.sh@817 -- # '[' -z 59446 ']' 00:11:41.422 20:03:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.422 20:03:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:41.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.422 20:03:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.422 20:03:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:41.422 20:03:23 -- common/autotest_common.sh@10 -- # set +x 00:11:41.681 [2024-04-24 20:03:23.712095] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:11:41.681 [2024-04-24 20:03:23.712174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ] 00:11:41.681 [2024-04-24 20:03:23.853248] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.940 [2024-04-24 20:03:23.957235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.509 20:03:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:42.509 20:03:24 -- common/autotest_common.sh@850 -- # return 0 00:11:42.509 20:03:24 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:42.768 20:03:24 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59446 00:11:42.768 20:03:24 -- common/autotest_common.sh@936 -- # '[' -z 59446 ']' 00:11:42.768 20:03:24 -- common/autotest_common.sh@940 -- # kill -0 59446 00:11:42.768 20:03:24 -- common/autotest_common.sh@941 -- # uname 00:11:42.768 20:03:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.768 20:03:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59446 00:11:42.768 20:03:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:42.768 20:03:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:42.768 20:03:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59446' 00:11:42.768 killing process with pid 59446 00:11:42.768 20:03:24 -- common/autotest_common.sh@955 -- # kill 59446 00:11:42.768 20:03:24 -- common/autotest_common.sh@960 -- # wait 59446 00:11:43.028 00:11:43.028 real 0m1.690s 00:11:43.028 user 0m1.854s 00:11:43.028 sys 0m0.387s 00:11:43.028 20:03:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:43.028 20:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.028 ************************************ 00:11:43.028 END TEST alias_rpc 00:11:43.028 ************************************ 00:11:43.028 20:03:25 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:11:43.028 20:03:25 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:43.028 20:03:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:43.028 20:03:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.028 20:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.287 ************************************ 00:11:43.287 START TEST spdkcli_tcp 00:11:43.287 ************************************ 00:11:43.287 20:03:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:11:43.287 * Looking for test storage... 
00:11:43.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:11:43.287 20:03:25 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:11:43.287 20:03:25 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:11:43.287 20:03:25 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:11:43.288 20:03:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:43.288 20:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59527 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:11:43.288 20:03:25 -- spdkcli/tcp.sh@27 -- # waitforlisten 59527 00:11:43.288 20:03:25 -- common/autotest_common.sh@817 -- # '[' -z 59527 ']' 00:11:43.288 20:03:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.288 20:03:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:43.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.288 20:03:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.288 20:03:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:43.288 20:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:43.547 [2024-04-24 20:03:25.551867] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
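Where the previous tests talk to the target over its UNIX-domain socket, spdkcli_tcp checks that the same JSON-RPC traffic works over TCP: in the steps that follow, socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP address instead. A sketch of that bridge with the addresses and rpc.py flags taken from the log (-r and -t are the retry and timeout settings the test passes):

  # Forward TCP port 9998 on localhost to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # Issue an RPC over TCP rather than over the UNIX socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"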
00:11:43.547 [2024-04-24 20:03:25.551944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59527 ] 00:11:43.547 [2024-04-24 20:03:25.693041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:43.806 [2024-04-24 20:03:25.804365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.806 [2024-04-24 20:03:25.804373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.376 20:03:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:44.376 20:03:26 -- common/autotest_common.sh@850 -- # return 0 00:11:44.376 20:03:26 -- spdkcli/tcp.sh@31 -- # socat_pid=59544 00:11:44.376 20:03:26 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:11:44.376 20:03:26 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:11:44.636 [ 00:11:44.636 "bdev_malloc_delete", 00:11:44.636 "bdev_malloc_create", 00:11:44.636 "bdev_null_resize", 00:11:44.636 "bdev_null_delete", 00:11:44.636 "bdev_null_create", 00:11:44.636 "bdev_nvme_cuse_unregister", 00:11:44.636 "bdev_nvme_cuse_register", 00:11:44.636 "bdev_opal_new_user", 00:11:44.636 "bdev_opal_set_lock_state", 00:11:44.636 "bdev_opal_delete", 00:11:44.636 "bdev_opal_get_info", 00:11:44.636 "bdev_opal_create", 00:11:44.636 "bdev_nvme_opal_revert", 00:11:44.636 "bdev_nvme_opal_init", 00:11:44.636 "bdev_nvme_send_cmd", 00:11:44.636 "bdev_nvme_get_path_iostat", 00:11:44.636 "bdev_nvme_get_mdns_discovery_info", 00:11:44.636 "bdev_nvme_stop_mdns_discovery", 00:11:44.636 "bdev_nvme_start_mdns_discovery", 00:11:44.636 "bdev_nvme_set_multipath_policy", 00:11:44.636 "bdev_nvme_set_preferred_path", 00:11:44.636 "bdev_nvme_get_io_paths", 00:11:44.636 "bdev_nvme_remove_error_injection", 00:11:44.636 "bdev_nvme_add_error_injection", 00:11:44.636 "bdev_nvme_get_discovery_info", 00:11:44.636 "bdev_nvme_stop_discovery", 00:11:44.636 "bdev_nvme_start_discovery", 00:11:44.636 "bdev_nvme_get_controller_health_info", 00:11:44.636 "bdev_nvme_disable_controller", 00:11:44.636 "bdev_nvme_enable_controller", 00:11:44.636 "bdev_nvme_reset_controller", 00:11:44.636 "bdev_nvme_get_transport_statistics", 00:11:44.636 "bdev_nvme_apply_firmware", 00:11:44.636 "bdev_nvme_detach_controller", 00:11:44.636 "bdev_nvme_get_controllers", 00:11:44.636 "bdev_nvme_attach_controller", 00:11:44.636 "bdev_nvme_set_hotplug", 00:11:44.636 "bdev_nvme_set_options", 00:11:44.636 "bdev_passthru_delete", 00:11:44.636 "bdev_passthru_create", 00:11:44.636 "bdev_lvol_grow_lvstore", 00:11:44.636 "bdev_lvol_get_lvols", 00:11:44.636 "bdev_lvol_get_lvstores", 00:11:44.636 "bdev_lvol_delete", 00:11:44.636 "bdev_lvol_set_read_only", 00:11:44.636 "bdev_lvol_resize", 00:11:44.636 "bdev_lvol_decouple_parent", 00:11:44.636 "bdev_lvol_inflate", 00:11:44.636 "bdev_lvol_rename", 00:11:44.636 "bdev_lvol_clone_bdev", 00:11:44.636 "bdev_lvol_clone", 00:11:44.636 "bdev_lvol_snapshot", 00:11:44.636 "bdev_lvol_create", 00:11:44.636 "bdev_lvol_delete_lvstore", 00:11:44.636 "bdev_lvol_rename_lvstore", 00:11:44.636 "bdev_lvol_create_lvstore", 00:11:44.636 "bdev_raid_set_options", 00:11:44.636 "bdev_raid_remove_base_bdev", 00:11:44.636 "bdev_raid_add_base_bdev", 00:11:44.636 "bdev_raid_delete", 00:11:44.636 "bdev_raid_create", 00:11:44.636 "bdev_raid_get_bdevs", 00:11:44.636 "bdev_error_inject_error", 
00:11:44.636 "bdev_error_delete", 00:11:44.636 "bdev_error_create", 00:11:44.636 "bdev_split_delete", 00:11:44.636 "bdev_split_create", 00:11:44.636 "bdev_delay_delete", 00:11:44.636 "bdev_delay_create", 00:11:44.636 "bdev_delay_update_latency", 00:11:44.636 "bdev_zone_block_delete", 00:11:44.636 "bdev_zone_block_create", 00:11:44.636 "blobfs_create", 00:11:44.636 "blobfs_detect", 00:11:44.636 "blobfs_set_cache_size", 00:11:44.636 "bdev_aio_delete", 00:11:44.636 "bdev_aio_rescan", 00:11:44.636 "bdev_aio_create", 00:11:44.636 "bdev_ftl_set_property", 00:11:44.636 "bdev_ftl_get_properties", 00:11:44.636 "bdev_ftl_get_stats", 00:11:44.636 "bdev_ftl_unmap", 00:11:44.636 "bdev_ftl_unload", 00:11:44.637 "bdev_ftl_delete", 00:11:44.637 "bdev_ftl_load", 00:11:44.637 "bdev_ftl_create", 00:11:44.637 "bdev_virtio_attach_controller", 00:11:44.637 "bdev_virtio_scsi_get_devices", 00:11:44.637 "bdev_virtio_detach_controller", 00:11:44.637 "bdev_virtio_blk_set_hotplug", 00:11:44.637 "bdev_iscsi_delete", 00:11:44.637 "bdev_iscsi_create", 00:11:44.637 "bdev_iscsi_set_options", 00:11:44.637 "bdev_uring_delete", 00:11:44.637 "bdev_uring_rescan", 00:11:44.637 "bdev_uring_create", 00:11:44.637 "accel_error_inject_error", 00:11:44.637 "ioat_scan_accel_module", 00:11:44.637 "dsa_scan_accel_module", 00:11:44.637 "iaa_scan_accel_module", 00:11:44.637 "keyring_file_remove_key", 00:11:44.637 "keyring_file_add_key", 00:11:44.637 "iscsi_set_options", 00:11:44.637 "iscsi_get_auth_groups", 00:11:44.637 "iscsi_auth_group_remove_secret", 00:11:44.637 "iscsi_auth_group_add_secret", 00:11:44.637 "iscsi_delete_auth_group", 00:11:44.637 "iscsi_create_auth_group", 00:11:44.637 "iscsi_set_discovery_auth", 00:11:44.637 "iscsi_get_options", 00:11:44.637 "iscsi_target_node_request_logout", 00:11:44.637 "iscsi_target_node_set_redirect", 00:11:44.637 "iscsi_target_node_set_auth", 00:11:44.637 "iscsi_target_node_add_lun", 00:11:44.637 "iscsi_get_stats", 00:11:44.637 "iscsi_get_connections", 00:11:44.637 "iscsi_portal_group_set_auth", 00:11:44.637 "iscsi_start_portal_group", 00:11:44.637 "iscsi_delete_portal_group", 00:11:44.637 "iscsi_create_portal_group", 00:11:44.637 "iscsi_get_portal_groups", 00:11:44.637 "iscsi_delete_target_node", 00:11:44.637 "iscsi_target_node_remove_pg_ig_maps", 00:11:44.637 "iscsi_target_node_add_pg_ig_maps", 00:11:44.637 "iscsi_create_target_node", 00:11:44.637 "iscsi_get_target_nodes", 00:11:44.637 "iscsi_delete_initiator_group", 00:11:44.637 "iscsi_initiator_group_remove_initiators", 00:11:44.637 "iscsi_initiator_group_add_initiators", 00:11:44.637 "iscsi_create_initiator_group", 00:11:44.637 "iscsi_get_initiator_groups", 00:11:44.637 "nvmf_set_crdt", 00:11:44.637 "nvmf_set_config", 00:11:44.637 "nvmf_set_max_subsystems", 00:11:44.637 "nvmf_subsystem_get_listeners", 00:11:44.637 "nvmf_subsystem_get_qpairs", 00:11:44.637 "nvmf_subsystem_get_controllers", 00:11:44.637 "nvmf_get_stats", 00:11:44.637 "nvmf_get_transports", 00:11:44.637 "nvmf_create_transport", 00:11:44.637 "nvmf_get_targets", 00:11:44.637 "nvmf_delete_target", 00:11:44.637 "nvmf_create_target", 00:11:44.637 "nvmf_subsystem_allow_any_host", 00:11:44.637 "nvmf_subsystem_remove_host", 00:11:44.637 "nvmf_subsystem_add_host", 00:11:44.637 "nvmf_ns_remove_host", 00:11:44.637 "nvmf_ns_add_host", 00:11:44.637 "nvmf_subsystem_remove_ns", 00:11:44.637 "nvmf_subsystem_add_ns", 00:11:44.637 "nvmf_subsystem_listener_set_ana_state", 00:11:44.637 "nvmf_discovery_get_referrals", 00:11:44.637 "nvmf_discovery_remove_referral", 00:11:44.637 
"nvmf_discovery_add_referral", 00:11:44.637 "nvmf_subsystem_remove_listener", 00:11:44.637 "nvmf_subsystem_add_listener", 00:11:44.637 "nvmf_delete_subsystem", 00:11:44.637 "nvmf_create_subsystem", 00:11:44.637 "nvmf_get_subsystems", 00:11:44.637 "env_dpdk_get_mem_stats", 00:11:44.637 "nbd_get_disks", 00:11:44.637 "nbd_stop_disk", 00:11:44.637 "nbd_start_disk", 00:11:44.637 "ublk_recover_disk", 00:11:44.637 "ublk_get_disks", 00:11:44.637 "ublk_stop_disk", 00:11:44.637 "ublk_start_disk", 00:11:44.637 "ublk_destroy_target", 00:11:44.637 "ublk_create_target", 00:11:44.637 "virtio_blk_create_transport", 00:11:44.637 "virtio_blk_get_transports", 00:11:44.637 "vhost_controller_set_coalescing", 00:11:44.637 "vhost_get_controllers", 00:11:44.637 "vhost_delete_controller", 00:11:44.637 "vhost_create_blk_controller", 00:11:44.637 "vhost_scsi_controller_remove_target", 00:11:44.637 "vhost_scsi_controller_add_target", 00:11:44.637 "vhost_start_scsi_controller", 00:11:44.637 "vhost_create_scsi_controller", 00:11:44.637 "thread_set_cpumask", 00:11:44.637 "framework_get_scheduler", 00:11:44.637 "framework_set_scheduler", 00:11:44.637 "framework_get_reactors", 00:11:44.637 "thread_get_io_channels", 00:11:44.637 "thread_get_pollers", 00:11:44.637 "thread_get_stats", 00:11:44.637 "framework_monitor_context_switch", 00:11:44.637 "spdk_kill_instance", 00:11:44.637 "log_enable_timestamps", 00:11:44.637 "log_get_flags", 00:11:44.637 "log_clear_flag", 00:11:44.637 "log_set_flag", 00:11:44.637 "log_get_level", 00:11:44.637 "log_set_level", 00:11:44.637 "log_get_print_level", 00:11:44.637 "log_set_print_level", 00:11:44.637 "framework_enable_cpumask_locks", 00:11:44.637 "framework_disable_cpumask_locks", 00:11:44.637 "framework_wait_init", 00:11:44.637 "framework_start_init", 00:11:44.637 "scsi_get_devices", 00:11:44.637 "bdev_get_histogram", 00:11:44.637 "bdev_enable_histogram", 00:11:44.637 "bdev_set_qos_limit", 00:11:44.637 "bdev_set_qd_sampling_period", 00:11:44.637 "bdev_get_bdevs", 00:11:44.637 "bdev_reset_iostat", 00:11:44.637 "bdev_get_iostat", 00:11:44.637 "bdev_examine", 00:11:44.637 "bdev_wait_for_examine", 00:11:44.637 "bdev_set_options", 00:11:44.637 "notify_get_notifications", 00:11:44.637 "notify_get_types", 00:11:44.637 "accel_get_stats", 00:11:44.637 "accel_set_options", 00:11:44.637 "accel_set_driver", 00:11:44.637 "accel_crypto_key_destroy", 00:11:44.637 "accel_crypto_keys_get", 00:11:44.637 "accel_crypto_key_create", 00:11:44.637 "accel_assign_opc", 00:11:44.637 "accel_get_module_info", 00:11:44.637 "accel_get_opc_assignments", 00:11:44.637 "vmd_rescan", 00:11:44.637 "vmd_remove_device", 00:11:44.637 "vmd_enable", 00:11:44.637 "sock_set_default_impl", 00:11:44.637 "sock_impl_set_options", 00:11:44.637 "sock_impl_get_options", 00:11:44.637 "iobuf_get_stats", 00:11:44.637 "iobuf_set_options", 00:11:44.637 "framework_get_pci_devices", 00:11:44.637 "framework_get_config", 00:11:44.637 "framework_get_subsystems", 00:11:44.637 "trace_get_info", 00:11:44.637 "trace_get_tpoint_group_mask", 00:11:44.637 "trace_disable_tpoint_group", 00:11:44.637 "trace_enable_tpoint_group", 00:11:44.637 "trace_clear_tpoint_mask", 00:11:44.637 "trace_set_tpoint_mask", 00:11:44.637 "keyring_get_keys", 00:11:44.637 "spdk_get_version", 00:11:44.637 "rpc_get_methods" 00:11:44.637 ] 00:11:44.637 20:03:26 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:11:44.637 20:03:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:44.637 20:03:26 -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 20:03:26 -- 
spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:44.637 20:03:26 -- spdkcli/tcp.sh@38 -- # killprocess 59527 00:11:44.637 20:03:26 -- common/autotest_common.sh@936 -- # '[' -z 59527 ']' 00:11:44.637 20:03:26 -- common/autotest_common.sh@940 -- # kill -0 59527 00:11:44.637 20:03:26 -- common/autotest_common.sh@941 -- # uname 00:11:44.637 20:03:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.637 20:03:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59527 00:11:44.637 20:03:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.637 20:03:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.637 20:03:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59527' 00:11:44.637 killing process with pid 59527 00:11:44.637 20:03:26 -- common/autotest_common.sh@955 -- # kill 59527 00:11:44.637 20:03:26 -- common/autotest_common.sh@960 -- # wait 59527 00:11:44.897 00:11:44.897 real 0m1.780s 00:11:44.897 user 0m3.170s 00:11:44.897 sys 0m0.477s 00:11:44.897 20:03:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:44.897 20:03:27 -- common/autotest_common.sh@10 -- # set +x 00:11:44.897 ************************************ 00:11:44.897 END TEST spdkcli_tcp 00:11:44.897 ************************************ 00:11:45.158 20:03:27 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:45.158 20:03:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:45.158 20:03:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.158 20:03:27 -- common/autotest_common.sh@10 -- # set +x 00:11:45.158 ************************************ 00:11:45.158 START TEST dpdk_mem_utility 00:11:45.158 ************************************ 00:11:45.158 20:03:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:45.158 * Looking for test storage... 00:11:45.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:45.158 20:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:45.158 20:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59624 00:11:45.158 20:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:45.158 20:03:27 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59624 00:11:45.158 20:03:27 -- common/autotest_common.sh@817 -- # '[' -z 59624 ']' 00:11:45.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.158 20:03:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.158 20:03:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:45.158 20:03:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.158 20:03:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:45.158 20:03:27 -- common/autotest_common.sh@10 -- # set +x 00:11:45.418 [2024-04-24 20:03:27.465018] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
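The dpdk_mem_utility test starting here works in two steps, both visible in the dump below: the env_dpdk_get_mem_stats RPC asks the target to write its DPDK memory statistics to a dump file (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py then summarizes that file, first as heap/mempool/memzone totals and then, with -m 0, as the per-element breakdown of heap 0. A sketch of the same two steps run by hand against the default RPC socket, with script paths from the log:

  spdk=/home/vagrant/spdk_repo/spdk
  # Ask the running target to dump its DPDK memory state; the reply names the dump file
  $spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize the dump: overall totals, then heap 0 element by element
  $spdk/scripts/dpdk_mem_info.py
  $spdk/scripts/dpdk_mem_info.py -m 0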
00:11:45.418 [2024-04-24 20:03:27.465099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59624 ] 00:11:45.418 [2024-04-24 20:03:27.596065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.677 [2024-04-24 20:03:27.698724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.248 20:03:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.248 20:03:28 -- common/autotest_common.sh@850 -- # return 0 00:11:46.248 20:03:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:46.248 20:03:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:46.248 20:03:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:46.248 20:03:28 -- common/autotest_common.sh@10 -- # set +x 00:11:46.248 { 00:11:46.248 "filename": "/tmp/spdk_mem_dump.txt" 00:11:46.248 } 00:11:46.248 20:03:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:46.248 20:03:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:46.248 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:46.248 1 heaps totaling size 814.000000 MiB 00:11:46.248 size: 814.000000 MiB heap id: 0 00:11:46.248 end heaps---------- 00:11:46.248 8 mempools totaling size 598.116089 MiB 00:11:46.248 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:46.248 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:46.248 size: 84.521057 MiB name: bdev_io_59624 00:11:46.248 size: 51.011292 MiB name: evtpool_59624 00:11:46.248 size: 50.003479 MiB name: msgpool_59624 00:11:46.248 size: 21.763794 MiB name: PDU_Pool 00:11:46.248 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:46.248 size: 0.026123 MiB name: Session_Pool 00:11:46.248 end mempools------- 00:11:46.248 6 memzones totaling size 4.142822 MiB 00:11:46.248 size: 1.000366 MiB name: RG_ring_0_59624 00:11:46.248 size: 1.000366 MiB name: RG_ring_1_59624 00:11:46.248 size: 1.000366 MiB name: RG_ring_4_59624 00:11:46.248 size: 1.000366 MiB name: RG_ring_5_59624 00:11:46.248 size: 0.125366 MiB name: RG_ring_2_59624 00:11:46.248 size: 0.015991 MiB name: RG_ring_3_59624 00:11:46.248 end memzones------- 00:11:46.248 20:03:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:46.248 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:11:46.248 list of free elements. 
size: 12.472107 MiB 00:11:46.248 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:46.248 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:46.248 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:46.248 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:46.248 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:46.248 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:46.248 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:46.248 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:46.248 element at address: 0x200000200000 with size: 0.833191 MiB 00:11:46.248 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:11:46.248 element at address: 0x20000b200000 with size: 0.489624 MiB 00:11:46.248 element at address: 0x200000800000 with size: 0.486145 MiB 00:11:46.248 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:46.248 element at address: 0x200027e00000 with size: 0.395935 MiB 00:11:46.248 element at address: 0x200003a00000 with size: 0.347839 MiB 00:11:46.248 list of standard malloc elements. size: 199.265320 MiB 00:11:46.248 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:46.248 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:46.248 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:46.248 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:46.248 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:46.248 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:46.248 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:46.248 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:46.248 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:46.248 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6600 with size: 0.000183 MiB 
00:11:46.248 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:46.248 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:46.248 element at address: 0x20000087c740 with size: 0.000183 MiB 00:11:46.248 element at address: 0x20000087c800 with size: 0.000183 MiB 00:11:46.248 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:11:46.248 element at address: 0x20000087c980 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59180 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59240 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59300 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59480 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59540 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59600 with size: 0.000183 MiB 00:11:46.249 element at 
address: 0x200003a596c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59780 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59840 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59900 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b27dac0 
with size: 0.000183 MiB 00:11:46.249 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93940 with size: 0.000183 MiB 
00:11:46.249 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:46.249 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200027e65680 with size: 0.000183 MiB 00:11:46.249 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:11:46.250 element at 
address: 0x200027e6cc00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f0c0 
with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:46.250 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:46.250 list of memzone associated elements. size: 602.262573 MiB 00:11:46.250 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:46.250 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:46.250 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:46.250 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:46.250 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:46.250 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59624_0 00:11:46.250 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:46.250 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59624_0 00:11:46.250 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:46.250 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59624_0 00:11:46.250 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:46.250 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:46.250 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:46.250 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:46.250 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:46.250 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59624 00:11:46.250 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:46.250 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59624 00:11:46.250 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:46.250 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59624 00:11:46.250 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:46.250 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:46.250 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:46.250 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:11:46.250 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:46.250 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:46.250 element at address: 0x2000008fd240 with size: 
1.008118 MiB 00:11:46.250 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:46.250 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:46.250 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59624 00:11:46.250 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:46.250 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59624 00:11:46.250 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:46.250 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59624 00:11:46.250 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:46.250 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59624 00:11:46.250 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:46.250 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59624 00:11:46.250 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:46.250 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:46.250 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:46.250 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:46.250 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:46.250 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:46.250 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:46.250 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59624 00:11:46.250 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:46.250 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:46.250 element at address: 0x200027e65740 with size: 0.023743 MiB 00:11:46.250 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:46.250 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:46.250 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59624 00:11:46.250 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:11:46.250 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:46.250 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:11:46.250 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59624 00:11:46.250 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:46.250 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59624 00:11:46.250 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:11:46.250 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:46.250 20:03:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:46.250 20:03:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59624 00:11:46.250 20:03:28 -- common/autotest_common.sh@936 -- # '[' -z 59624 ']' 00:11:46.250 20:03:28 -- common/autotest_common.sh@940 -- # kill -0 59624 00:11:46.250 20:03:28 -- common/autotest_common.sh@941 -- # uname 00:11:46.250 20:03:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.250 20:03:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59624 00:11:46.250 20:03:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:46.250 20:03:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:46.251 20:03:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59624' 00:11:46.251 killing process with pid 59624 00:11:46.251 20:03:28 -- common/autotest_common.sh@955 -- # kill 59624 00:11:46.251 20:03:28 -- 
common/autotest_common.sh@960 -- # wait 59624 00:11:46.820 00:11:46.820 real 0m1.565s 00:11:46.820 user 0m1.628s 00:11:46.820 sys 0m0.397s 00:11:46.820 20:03:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:46.820 20:03:28 -- common/autotest_common.sh@10 -- # set +x 00:11:46.820 ************************************ 00:11:46.820 END TEST dpdk_mem_utility 00:11:46.820 ************************************ 00:11:46.820 20:03:28 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:46.820 20:03:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:46.820 20:03:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.820 20:03:28 -- common/autotest_common.sh@10 -- # set +x 00:11:46.820 ************************************ 00:11:46.820 START TEST event 00:11:46.820 ************************************ 00:11:46.820 20:03:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:47.101 * Looking for test storage... 00:11:47.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:47.101 20:03:29 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:47.101 20:03:29 -- bdev/nbd_common.sh@6 -- # set -e 00:11:47.101 20:03:29 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:47.101 20:03:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:47.101 20:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:47.101 20:03:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.101 ************************************ 00:11:47.101 START TEST event_perf 00:11:47.101 ************************************ 00:11:47.101 20:03:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:47.101 Running I/O for 1 seconds...[2024-04-24 20:03:29.231751] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:47.101 [2024-04-24 20:03:29.231891] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59705 ] 00:11:47.359 [2024-04-24 20:03:29.375243] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.359 [2024-04-24 20:03:29.472942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.359 [2024-04-24 20:03:29.473342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.359 [2024-04-24 20:03:29.473145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.359 Running I/O for 1 seconds...[2024-04-24 20:03:29.473345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.738 00:11:48.738 lcore 0: 187502 00:11:48.738 lcore 1: 187503 00:11:48.738 lcore 2: 187501 00:11:48.738 lcore 3: 187501 00:11:48.738 done. 
00:11:48.738 00:11:48.738 real 0m1.375s 00:11:48.738 user 0m4.198s 00:11:48.738 sys 0m0.055s 00:11:48.738 20:03:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.738 20:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:48.738 ************************************ 00:11:48.738 END TEST event_perf 00:11:48.738 ************************************ 00:11:48.738 20:03:30 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:48.738 20:03:30 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:48.738 20:03:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.738 20:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:48.738 ************************************ 00:11:48.738 START TEST event_reactor 00:11:48.738 ************************************ 00:11:48.738 20:03:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:48.738 [2024-04-24 20:03:30.742466] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:11:48.738 [2024-04-24 20:03:30.742556] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:11:48.738 [2024-04-24 20:03:30.883827] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.738 [2024-04-24 20:03:30.978287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.115 test_start 00:11:50.115 oneshot 00:11:50.115 tick 100 00:11:50.115 tick 100 00:11:50.115 tick 250 00:11:50.115 tick 100 00:11:50.115 tick 100 00:11:50.115 tick 100 00:11:50.115 tick 250 00:11:50.115 tick 500 00:11:50.115 tick 100 00:11:50.115 tick 100 00:11:50.115 tick 250 00:11:50.115 tick 100 00:11:50.115 tick 100 00:11:50.115 test_end 00:11:50.115 ************************************ 00:11:50.115 END TEST event_reactor 00:11:50.115 ************************************ 00:11:50.115 00:11:50.115 real 0m1.360s 00:11:50.115 user 0m1.205s 00:11:50.115 sys 0m0.050s 00:11:50.115 20:03:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:50.115 20:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:50.115 20:03:32 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:50.115 20:03:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:50.115 20:03:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:50.115 20:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:50.115 ************************************ 00:11:50.115 START TEST event_reactor_perf 00:11:50.115 ************************************ 00:11:50.115 20:03:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:50.115 [2024-04-24 20:03:32.222050] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:11:50.115 [2024-04-24 20:03:32.222144] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59782 ] 00:11:50.115 [2024-04-24 20:03:32.364234] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.374 [2024-04-24 20:03:32.472637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.752 test_start 00:11:51.752 test_end 00:11:51.752 Performance: 425610 events per second 00:11:51.752 00:11:51.752 real 0m1.377s 00:11:51.752 user 0m1.217s 00:11:51.752 sys 0m0.053s 00:11:51.752 ************************************ 00:11:51.752 END TEST event_reactor_perf 00:11:51.752 ************************************ 00:11:51.752 20:03:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:51.752 20:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.752 20:03:33 -- event/event.sh@49 -- # uname -s 00:11:51.752 20:03:33 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:51.752 20:03:33 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:51.752 20:03:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:51.752 20:03:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:51.752 20:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.752 ************************************ 00:11:51.752 START TEST event_scheduler 00:11:51.752 ************************************ 00:11:51.752 20:03:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:51.752 * Looking for test storage... 00:11:51.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:51.752 20:03:33 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:51.752 20:03:33 -- scheduler/scheduler.sh@35 -- # scheduler_pid=59855 00:11:51.752 20:03:33 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:51.752 20:03:33 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:51.752 20:03:33 -- scheduler/scheduler.sh@37 -- # waitforlisten 59855 00:11:51.752 20:03:33 -- common/autotest_common.sh@817 -- # '[' -z 59855 ']' 00:11:51.752 20:03:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.752 20:03:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:51.752 20:03:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.752 20:03:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:51.752 20:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:51.752 [2024-04-24 20:03:33.876113] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:11:51.752 [2024-04-24 20:03:33.876266] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 00:11:52.010 [2024-04-24 20:03:34.016406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.010 [2024-04-24 20:03:34.122198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.010 [2024-04-24 20:03:34.122262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.010 [2024-04-24 20:03:34.122438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.010 [2024-04-24 20:03:34.122441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.577 20:03:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:52.577 20:03:34 -- common/autotest_common.sh@850 -- # return 0 00:11:52.577 20:03:34 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:52.577 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.577 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.577 POWER: Env isn't set yet! 00:11:52.577 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:52.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.577 POWER: Cannot set governor of lcore 0 to userspace 00:11:52.577 POWER: Attempting to initialise PSTAT power management... 00:11:52.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.577 POWER: Cannot set governor of lcore 0 to performance 00:11:52.577 POWER: Attempting to initialise AMD PSTATE power management... 00:11:52.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.577 POWER: Cannot set governor of lcore 0 to userspace 00:11:52.577 POWER: Attempting to initialise CPPC power management... 00:11:52.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:52.577 POWER: Cannot set governor of lcore 0 to userspace 00:11:52.577 POWER: Attempting to initialise VM power management... 00:11:52.577 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:52.577 POWER: Unable to set Power Management Environment for lcore 0 00:11:52.577 [2024-04-24 20:03:34.734783] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:11:52.577 [2024-04-24 20:03:34.734819] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:11:52.577 [2024-04-24 20:03:34.734850] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:11:52.577 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.577 20:03:34 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:52.577 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.577 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.577 [2024-04-24 20:03:34.814343] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
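(Editor's note, not part of the captured log: the trace above shows the dynamic scheduler probing ACPI cpufreq, intel_pstate, AMD PSTATE, CPPC and VM power management in turn, and falling back because this CI VM exposes no scaling_governor interface. A minimal host-side sketch of the check one might run before expecting the DPDK power governor to initialize — the sysfs paths are standard Linux cpufreq; everything else here is illustrative and not taken from the test itself.)

  #!/usr/bin/env bash
  # Illustrative only: report whether cpufreq scaling is available on each CPU.
  # The DPDK power governor needs a writable scaling_governor; on this VM it is
  # absent, which is why the scheduler falls back as shown in the log above.
  set -euo pipefail

  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      gov="$cpu/cpufreq/scaling_governor"
      if [[ -r "$gov" ]]; then
          echo "$(basename "$cpu"): governor=$(cat "$gov")"
      else
          echo "$(basename "$cpu"): no cpufreq interface (dynamic scheduler will fall back)"
      fi
  done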
00:11:52.577 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.577 20:03:34 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:52.577 20:03:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:52.577 20:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.577 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 ************************************ 00:11:52.835 START TEST scheduler_create_thread 00:11:52.835 ************************************ 00:11:52.835 20:03:34 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 2 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 3 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 4 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 5 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 6 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 7 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 8 00:11:52.835 20:03:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:34 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:52.835 20:03:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:52.835 9 00:11:52.835 
20:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:52.835 20:03:35 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:52.835 20:03:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:52.835 20:03:35 -- common/autotest_common.sh@10 -- # set +x 00:11:53.401 10 00:11:53.401 20:03:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:53.401 20:03:35 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:53.401 20:03:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:53.401 20:03:35 -- common/autotest_common.sh@10 -- # set +x 00:11:54.776 20:03:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:54.776 20:03:36 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:54.776 20:03:36 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:54.776 20:03:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:54.776 20:03:36 -- common/autotest_common.sh@10 -- # set +x 00:11:55.340 20:03:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.599 20:03:37 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:55.599 20:03:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.599 20:03:37 -- common/autotest_common.sh@10 -- # set +x 00:11:56.182 20:03:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:56.182 20:03:38 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:56.182 20:03:38 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:56.182 20:03:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:56.182 20:03:38 -- common/autotest_common.sh@10 -- # set +x 00:11:57.118 ************************************ 00:11:57.118 END TEST scheduler_create_thread 00:11:57.118 ************************************ 00:11:57.118 20:03:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:57.118 00:11:57.118 real 0m4.208s 00:11:57.118 user 0m0.029s 00:11:57.118 sys 0m0.006s 00:11:57.118 20:03:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:57.118 20:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.118 20:03:39 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:57.118 20:03:39 -- scheduler/scheduler.sh@46 -- # killprocess 59855 00:11:57.118 20:03:39 -- common/autotest_common.sh@936 -- # '[' -z 59855 ']' 00:11:57.118 20:03:39 -- common/autotest_common.sh@940 -- # kill -0 59855 00:11:57.118 20:03:39 -- common/autotest_common.sh@941 -- # uname 00:11:57.118 20:03:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:57.118 20:03:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59855 00:11:57.118 killing process with pid 59855 00:11:57.118 20:03:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:57.118 20:03:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:57.118 20:03:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59855' 00:11:57.118 20:03:39 -- common/autotest_common.sh@955 -- # kill 59855 00:11:57.118 20:03:39 -- common/autotest_common.sh@960 -- # wait 59855 00:11:57.377 [2024-04-24 20:03:39.390218] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
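(Editor's note, not part of the captured log: the scheduler run above is driven entirely over JSON-RPC — the test app starts with --wait-for-rpc, the scheduler is switched to dynamic, framework_start_init is issued, and threads are created and deleted through the test-only scheduler_plugin. A hedged outline of that sequence using scripts/rpc.py, assuming an SPDK app is already listening on the default socket; the thread parameters and the returned thread id are illustrative.)

  # Sketch of the RPC sequence exercised above; run against an SPDK app started
  # with --wait-for-rpc. The scheduler_plugin lives under test/event/scheduler
  # and must be on PYTHONPATH, as the test's rpc_cmd wrapper arranges.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc framework_set_scheduler dynamic     # select the dynamic scheduler
  $rpc framework_start_init                # finish subsystem initialization
  # Test-plugin RPCs for creating/removing threads with a given cpumask/load:
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  $rpc --plugin scheduler_plugin scheduler_thread_delete 11   # id returned by the create call (illustrative)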
00:11:57.637 00:11:57.637 real 0m5.971s 00:11:57.637 user 0m12.928s 00:11:57.637 sys 0m0.417s 00:11:57.637 20:03:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:57.637 ************************************ 00:11:57.637 END TEST event_scheduler 00:11:57.637 ************************************ 00:11:57.637 20:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 20:03:39 -- event/event.sh@51 -- # modprobe -n nbd 00:11:57.637 20:03:39 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:57.637 20:03:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:57.637 20:03:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.637 20:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 ************************************ 00:11:57.637 START TEST app_repeat 00:11:57.637 ************************************ 00:11:57.637 20:03:39 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:11:57.637 20:03:39 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.637 20:03:39 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:57.637 20:03:39 -- event/event.sh@13 -- # local nbd_list 00:11:57.637 20:03:39 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:57.637 20:03:39 -- event/event.sh@14 -- # local bdev_list 00:11:57.637 20:03:39 -- event/event.sh@15 -- # local repeat_times=4 00:11:57.637 20:03:39 -- event/event.sh@17 -- # modprobe nbd 00:11:57.637 20:03:39 -- event/event.sh@19 -- # repeat_pid=59978 00:11:57.637 20:03:39 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:57.637 20:03:39 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:57.637 20:03:39 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59978' 00:11:57.637 Process app_repeat pid: 59978 00:11:57.637 20:03:39 -- event/event.sh@23 -- # for i in {0..2} 00:11:57.637 spdk_app_start Round 0 00:11:57.637 20:03:39 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:57.637 20:03:39 -- event/event.sh@25 -- # waitforlisten 59978 /var/tmp/spdk-nbd.sock 00:11:57.637 20:03:39 -- common/autotest_common.sh@817 -- # '[' -z 59978 ']' 00:11:57.637 20:03:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:57.637 20:03:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:57.637 20:03:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:57.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:57.637 20:03:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:57.637 20:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 [2024-04-24 20:03:39.848058] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:11:57.637 [2024-04-24 20:03:39.848142] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59978 ] 00:11:57.896 [2024-04-24 20:03:39.987889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:57.896 [2024-04-24 20:03:40.090468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.896 [2024-04-24 20:03:40.090475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.831 20:03:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:58.831 20:03:40 -- common/autotest_common.sh@850 -- # return 0 00:11:58.831 20:03:40 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:58.831 Malloc0 00:11:58.831 20:03:40 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:59.089 Malloc1 00:11:59.089 20:03:41 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@12 -- # local i 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.089 20:03:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:59.347 /dev/nbd0 00:11:59.347 20:03:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:59.347 20:03:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:59.347 20:03:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:59.347 20:03:41 -- common/autotest_common.sh@855 -- # local i 00:11:59.347 20:03:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:59.347 20:03:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:59.347 20:03:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:59.347 20:03:41 -- common/autotest_common.sh@859 -- # break 00:11:59.347 20:03:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:59.347 20:03:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:59.347 20:03:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:59.347 1+0 records in 00:11:59.347 1+0 records out 00:11:59.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364691 s, 11.2 MB/s 00:11:59.347 20:03:41 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.347 20:03:41 -- common/autotest_common.sh@872 -- # size=4096 00:11:59.347 20:03:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.347 20:03:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:59.347 20:03:41 -- common/autotest_common.sh@875 -- # return 0 00:11:59.347 20:03:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.347 20:03:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.347 20:03:41 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:59.607 /dev/nbd1 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:59.607 20:03:41 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:59.607 20:03:41 -- common/autotest_common.sh@855 -- # local i 00:11:59.607 20:03:41 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:59.607 20:03:41 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:59.607 20:03:41 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:59.607 20:03:41 -- common/autotest_common.sh@859 -- # break 00:11:59.607 20:03:41 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:59.607 20:03:41 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:59.607 20:03:41 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:59.607 1+0 records in 00:11:59.607 1+0 records out 00:11:59.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420118 s, 9.7 MB/s 00:11:59.607 20:03:41 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.607 20:03:41 -- common/autotest_common.sh@872 -- # size=4096 00:11:59.607 20:03:41 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:59.607 20:03:41 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:59.607 20:03:41 -- common/autotest_common.sh@875 -- # return 0 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.607 20:03:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:59.867 { 00:11:59.867 "nbd_device": "/dev/nbd0", 00:11:59.867 "bdev_name": "Malloc0" 00:11:59.867 }, 00:11:59.867 { 00:11:59.867 "nbd_device": "/dev/nbd1", 00:11:59.867 "bdev_name": "Malloc1" 00:11:59.867 } 00:11:59.867 ]' 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:59.867 { 00:11:59.867 "nbd_device": "/dev/nbd0", 00:11:59.867 "bdev_name": "Malloc0" 00:11:59.867 }, 00:11:59.867 { 00:11:59.867 "nbd_device": "/dev/nbd1", 00:11:59.867 "bdev_name": "Malloc1" 00:11:59.867 } 00:11:59.867 ]' 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:59.867 /dev/nbd1' 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:59.867 /dev/nbd1' 00:11:59.867 20:03:41 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@65 -- # count=2 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@66 -- # echo 2 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@95 -- # count=2 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:59.867 256+0 records in 00:11:59.867 256+0 records out 00:11:59.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616594 s, 170 MB/s 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.867 20:03:41 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:59.867 256+0 records in 00:11:59.867 256+0 records out 00:11:59.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175194 s, 59.9 MB/s 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:59.867 256+0 records in 00:11:59.867 256+0 records out 00:11:59.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205971 s, 50.9 MB/s 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@51 -- # local i 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.867 20:03:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@41 -- # break 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.126 20:03:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@41 -- # break 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.384 20:03:42 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@65 -- # true 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@65 -- # count=0 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@104 -- # count=0 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:00.643 20:03:42 -- bdev/nbd_common.sh@109 -- # return 0 00:12:00.643 20:03:42 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:00.901 20:03:43 -- event/event.sh@35 -- # sleep 3 00:12:01.159 [2024-04-24 20:03:43.273362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:01.159 [2024-04-24 20:03:43.377022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.159 [2024-04-24 20:03:43.377026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.417 [2024-04-24 20:03:43.421057] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:01.417 [2024-04-24 20:03:43.421110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
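(Editor's note, not part of the captured log: Round 0 above follows a fixed pattern for each exported bdev — start the NBD device over RPC, write random data through it with dd, then cmp the device contents against the source file before stopping the disk. A condensed sketch of that write/verify loop, assuming the app_repeat app is listening on /var/tmp/spdk-nbd.sock with Malloc0/Malloc1 bdevs as in this run; the temp-file handling is illustrative.)

  # Illustrative reproduction of the write/verify step seen in the trace above.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  tmp=$(mktemp)

  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data

  for pair in "Malloc0 /dev/nbd0" "Malloc1 /dev/nbd1"; do
      set -- $pair
      $rpc nbd_start_disk "$1" "$2"                            # export bdev $1 as NBD device $2
      dd if="$tmp" of="$2" bs=4096 count=256 oflag=direct      # write through the NBD device
      cmp -b -n 1M "$tmp" "$2"                                 # verify readback matches the source
      $rpc nbd_stop_disk "$2"
  done

  rm -f "$tmp"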
00:12:03.955 20:03:46 -- event/event.sh@23 -- # for i in {0..2} 00:12:03.955 spdk_app_start Round 1 00:12:03.955 20:03:46 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:03.955 20:03:46 -- event/event.sh@25 -- # waitforlisten 59978 /var/tmp/spdk-nbd.sock 00:12:03.955 20:03:46 -- common/autotest_common.sh@817 -- # '[' -z 59978 ']' 00:12:03.955 20:03:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:03.955 20:03:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:03.955 20:03:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:03.955 20:03:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.955 20:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:04.214 20:03:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:04.214 20:03:46 -- common/autotest_common.sh@850 -- # return 0 00:12:04.214 20:03:46 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:04.473 Malloc0 00:12:04.473 20:03:46 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:04.733 Malloc1 00:12:04.733 20:03:46 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@12 -- # local i 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.733 20:03:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:04.733 /dev/nbd0 00:12:04.992 20:03:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:04.992 20:03:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:04.992 20:03:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:04.992 20:03:46 -- common/autotest_common.sh@855 -- # local i 00:12:04.992 20:03:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:04.992 20:03:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:04.992 20:03:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:04.992 20:03:46 -- common/autotest_common.sh@859 -- # break 00:12:04.992 20:03:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:04.992 20:03:46 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:12:04.992 20:03:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:04.992 1+0 records in 00:12:04.992 1+0 records out 00:12:04.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339774 s, 12.1 MB/s 00:12:04.992 20:03:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.992 20:03:47 -- common/autotest_common.sh@872 -- # size=4096 00:12:04.992 20:03:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:04.992 20:03:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:04.992 20:03:47 -- common/autotest_common.sh@875 -- # return 0 00:12:04.992 20:03:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.992 20:03:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:04.992 20:03:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:04.992 /dev/nbd1 00:12:04.992 20:03:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:04.992 20:03:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:04.992 20:03:47 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:04.992 20:03:47 -- common/autotest_common.sh@855 -- # local i 00:12:04.992 20:03:47 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:04.992 20:03:47 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:04.992 20:03:47 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:05.254 20:03:47 -- common/autotest_common.sh@859 -- # break 00:12:05.254 20:03:47 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:05.254 20:03:47 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:05.254 20:03:47 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:05.254 1+0 records in 00:12:05.254 1+0 records out 00:12:05.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570987 s, 7.2 MB/s 00:12:05.254 20:03:47 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:05.254 20:03:47 -- common/autotest_common.sh@872 -- # size=4096 00:12:05.254 20:03:47 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:05.254 20:03:47 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:05.254 20:03:47 -- common/autotest_common.sh@875 -- # return 0 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:05.254 { 00:12:05.254 "nbd_device": "/dev/nbd0", 00:12:05.254 "bdev_name": "Malloc0" 00:12:05.254 }, 00:12:05.254 { 00:12:05.254 "nbd_device": "/dev/nbd1", 00:12:05.254 "bdev_name": "Malloc1" 00:12:05.254 } 00:12:05.254 ]' 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:05.254 { 00:12:05.254 "nbd_device": "/dev/nbd0", 00:12:05.254 "bdev_name": "Malloc0" 00:12:05.254 }, 00:12:05.254 { 00:12:05.254 "nbd_device": "/dev/nbd1", 00:12:05.254 "bdev_name": "Malloc1" 00:12:05.254 } 
00:12:05.254 ]' 00:12:05.254 20:03:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:05.514 20:03:47 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:05.515 /dev/nbd1' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:05.515 /dev/nbd1' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@65 -- # count=2 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@95 -- # count=2 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:05.515 256+0 records in 00:12:05.515 256+0 records out 00:12:05.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00542988 s, 193 MB/s 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:05.515 256+0 records in 00:12:05.515 256+0 records out 00:12:05.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182864 s, 57.3 MB/s 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:05.515 256+0 records in 00:12:05.515 256+0 records out 00:12:05.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262153 s, 40.0 MB/s 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:12:05.515 20:03:47 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@51 -- # local i 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.515 20:03:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@41 -- # break 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@45 -- # return 0 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:05.774 20:03:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@41 -- # break 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@45 -- # return 0 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:06.033 20:03:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@65 -- # true 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@65 -- # count=0 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@104 -- # count=0 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:06.292 20:03:48 -- bdev/nbd_common.sh@109 -- # return 0 00:12:06.292 20:03:48 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:06.552 20:03:48 -- event/event.sh@35 -- # sleep 3 00:12:06.552 [2024-04-24 20:03:48.782312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:06.812 [2024-04-24 20:03:48.883603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.812 [2024-04-24 20:03:48.883606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.812 [2024-04-24 20:03:48.928494] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
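[annotation] The Round 1 trace above exercises SPDK's NBD data-path check: two malloc bdevs are created over the app's RPC socket, exported as /dev/nbd0 and /dev/nbd1, filled from a random file with dd, and read back with cmp. A minimal standalone sketch of that pattern follows; it assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock and that the nbd kernel module is loaded, and the scratch-file path is illustrative rather than the harness's own nbdrandtest file.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=$(mktemp)                                    # scratch file (the harness uses test/event/nbdrandtest)
  "$rpc" -s "$sock" bdev_malloc_create 64 4096     # 64 MB malloc bdev, 4 KiB blocks; prints the bdev name (Malloc0 in the trace)
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0                    # read back through the NBD device and compare
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  rm -f "$tmp"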
00:12:06.812 [2024-04-24 20:03:48.928541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:09.405 spdk_app_start Round 2 00:12:09.405 20:03:51 -- event/event.sh@23 -- # for i in {0..2} 00:12:09.405 20:03:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:09.405 20:03:51 -- event/event.sh@25 -- # waitforlisten 59978 /var/tmp/spdk-nbd.sock 00:12:09.405 20:03:51 -- common/autotest_common.sh@817 -- # '[' -z 59978 ']' 00:12:09.405 20:03:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:09.405 20:03:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:09.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:09.405 20:03:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:09.405 20:03:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:09.405 20:03:51 -- common/autotest_common.sh@10 -- # set +x 00:12:09.664 20:03:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.664 20:03:51 -- common/autotest_common.sh@850 -- # return 0 00:12:09.664 20:03:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:09.946 Malloc0 00:12:09.946 20:03:52 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:10.205 Malloc1 00:12:10.205 20:03:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@12 -- # local i 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.205 20:03:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:10.463 /dev/nbd0 00:12:10.463 20:03:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.463 20:03:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.463 20:03:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:10.463 20:03:52 -- common/autotest_common.sh@855 -- # local i 00:12:10.463 20:03:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:10.463 20:03:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:10.463 20:03:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:10.463 20:03:52 -- common/autotest_common.sh@859 
-- # break 00:12:10.463 20:03:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:10.463 20:03:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:10.463 20:03:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:10.463 1+0 records in 00:12:10.463 1+0 records out 00:12:10.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340035 s, 12.0 MB/s 00:12:10.464 20:03:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:10.464 20:03:52 -- common/autotest_common.sh@872 -- # size=4096 00:12:10.464 20:03:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:10.464 20:03:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:10.464 20:03:52 -- common/autotest_common.sh@875 -- # return 0 00:12:10.464 20:03:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.464 20:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.464 20:03:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:10.723 /dev/nbd1 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:10.723 20:03:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:10.723 20:03:52 -- common/autotest_common.sh@855 -- # local i 00:12:10.723 20:03:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:10.723 20:03:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:10.723 20:03:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:10.723 20:03:52 -- common/autotest_common.sh@859 -- # break 00:12:10.723 20:03:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:10.723 20:03:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:10.723 20:03:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:10.723 1+0 records in 00:12:10.723 1+0 records out 00:12:10.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209626 s, 19.5 MB/s 00:12:10.723 20:03:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:10.723 20:03:52 -- common/autotest_common.sh@872 -- # size=4096 00:12:10.723 20:03:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:10.723 20:03:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:10.723 20:03:52 -- common/autotest_common.sh@875 -- # return 0 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.723 20:03:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:10.981 20:03:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:10.981 { 00:12:10.981 "nbd_device": "/dev/nbd0", 00:12:10.981 "bdev_name": "Malloc0" 00:12:10.981 }, 00:12:10.981 { 00:12:10.981 "nbd_device": "/dev/nbd1", 00:12:10.981 "bdev_name": "Malloc1" 00:12:10.981 } 00:12:10.981 ]' 00:12:10.981 20:03:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:10.981 { 00:12:10.982 "nbd_device": "/dev/nbd0", 00:12:10.982 
"bdev_name": "Malloc0" 00:12:10.982 }, 00:12:10.982 { 00:12:10.982 "nbd_device": "/dev/nbd1", 00:12:10.982 "bdev_name": "Malloc1" 00:12:10.982 } 00:12:10.982 ]' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:10.982 /dev/nbd1' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:10.982 /dev/nbd1' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@65 -- # count=2 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@95 -- # count=2 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:10.982 256+0 records in 00:12:10.982 256+0 records out 00:12:10.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00626539 s, 167 MB/s 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:10.982 256+0 records in 00:12:10.982 256+0 records out 00:12:10.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019394 s, 54.1 MB/s 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:10.982 256+0 records in 00:12:10.982 256+0 records out 00:12:10.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218253 s, 48.0 MB/s 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:10.982 20:03:53 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@51 -- # local i 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.982 20:03:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@41 -- # break 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.241 20:03:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@41 -- # break 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:11.501 20:03:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@65 -- # true 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@65 -- # count=0 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@104 -- # count=0 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:11.761 20:03:53 -- bdev/nbd_common.sh@109 -- # return 0 00:12:11.761 20:03:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:12.020 20:03:54 -- event/event.sh@35 -- # sleep 3 00:12:12.277 [2024-04-24 20:03:54.288998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:12.277 [2024-04-24 20:03:54.390652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.277 [2024-04-24 20:03:54.390653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.277 [2024-04-24 20:03:54.434302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:12:12.278 [2024-04-24 20:03:54.434359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:15.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:15.577 20:03:57 -- event/event.sh@38 -- # waitforlisten 59978 /var/tmp/spdk-nbd.sock 00:12:15.577 20:03:57 -- common/autotest_common.sh@817 -- # '[' -z 59978 ']' 00:12:15.577 20:03:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:15.577 20:03:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:15.577 20:03:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:15.577 20:03:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:15.577 20:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.577 20:03:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:15.577 20:03:57 -- common/autotest_common.sh@850 -- # return 0 00:12:15.577 20:03:57 -- event/event.sh@39 -- # killprocess 59978 00:12:15.577 20:03:57 -- common/autotest_common.sh@936 -- # '[' -z 59978 ']' 00:12:15.577 20:03:57 -- common/autotest_common.sh@940 -- # kill -0 59978 00:12:15.577 20:03:57 -- common/autotest_common.sh@941 -- # uname 00:12:15.577 20:03:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.577 20:03:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59978 00:12:15.577 killing process with pid 59978 00:12:15.577 20:03:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:15.577 20:03:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:15.577 20:03:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59978' 00:12:15.577 20:03:57 -- common/autotest_common.sh@955 -- # kill 59978 00:12:15.578 20:03:57 -- common/autotest_common.sh@960 -- # wait 59978 00:12:15.578 spdk_app_start is called in Round 0. 00:12:15.578 Shutdown signal received, stop current app iteration 00:12:15.578 Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 reinitialization... 00:12:15.578 spdk_app_start is called in Round 1. 00:12:15.578 Shutdown signal received, stop current app iteration 00:12:15.578 Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 reinitialization... 00:12:15.578 spdk_app_start is called in Round 2. 00:12:15.578 Shutdown signal received, stop current app iteration 00:12:15.578 Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 reinitialization... 00:12:15.578 spdk_app_start is called in Round 3. 
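[annotation] Each app_repeat round above ends the same way: the script asks the app to terminate through its RPC socket with spdk_kill_instance SIGTERM, the app reinitializes itself (the "reinitialization..." lines), and the script sleeps before repeating the bdev/NBD verification. A condensed sketch of that outer loop; the comment stands in for the setup and verify steps shown earlier in the trace.

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      # ... recreate Malloc0/Malloc1 and rerun the NBD write/verify pass ...
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          spdk_kill_instance SIGTERM              # ask the app to shut down this iteration
      sleep 3                                     # give it time to reinitialize before the next round
  done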
00:12:15.578 Shutdown signal received, stop current app iteration 00:12:15.578 20:03:57 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:15.578 20:03:57 -- event/event.sh@42 -- # return 0 00:12:15.578 00:12:15.578 real 0m17.802s 00:12:15.578 user 0m39.316s 00:12:15.578 sys 0m2.540s 00:12:15.578 20:03:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:15.578 20:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.578 ************************************ 00:12:15.578 END TEST app_repeat 00:12:15.578 ************************************ 00:12:15.578 20:03:57 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:15.578 20:03:57 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:15.578 20:03:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:15.578 20:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.578 20:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.578 ************************************ 00:12:15.578 START TEST cpu_locks 00:12:15.578 ************************************ 00:12:15.578 20:03:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:15.837 * Looking for test storage... 00:12:15.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:15.837 20:03:57 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:15.837 20:03:57 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:15.837 20:03:57 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:15.837 20:03:57 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:15.837 20:03:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:15.837 20:03:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.837 20:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.837 ************************************ 00:12:15.837 START TEST default_locks 00:12:15.837 ************************************ 00:12:15.837 20:03:57 -- common/autotest_common.sh@1111 -- # default_locks 00:12:15.837 20:03:57 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60410 00:12:15.837 20:03:57 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:15.837 20:03:57 -- event/cpu_locks.sh@47 -- # waitforlisten 60410 00:12:15.837 20:03:57 -- common/autotest_common.sh@817 -- # '[' -z 60410 ']' 00:12:15.837 20:03:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.837 20:03:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:15.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.837 20:03:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.837 20:03:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:15.837 20:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:15.837 [2024-04-24 20:03:58.007087] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:15.837 [2024-04-24 20:03:58.007167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60410 ] 00:12:16.096 [2024-04-24 20:03:58.143327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.096 [2024-04-24 20:03:58.297503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.664 20:03:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:16.664 20:03:58 -- common/autotest_common.sh@850 -- # return 0 00:12:16.664 20:03:58 -- event/cpu_locks.sh@49 -- # locks_exist 60410 00:12:16.923 20:03:58 -- event/cpu_locks.sh@22 -- # lslocks -p 60410 00:12:16.923 20:03:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:16.924 20:03:59 -- event/cpu_locks.sh@50 -- # killprocess 60410 00:12:16.924 20:03:59 -- common/autotest_common.sh@936 -- # '[' -z 60410 ']' 00:12:16.924 20:03:59 -- common/autotest_common.sh@940 -- # kill -0 60410 00:12:16.924 20:03:59 -- common/autotest_common.sh@941 -- # uname 00:12:16.924 20:03:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.924 20:03:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60410 00:12:17.183 20:03:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:17.183 20:03:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:17.183 20:03:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60410' 00:12:17.183 killing process with pid 60410 00:12:17.183 20:03:59 -- common/autotest_common.sh@955 -- # kill 60410 00:12:17.183 20:03:59 -- common/autotest_common.sh@960 -- # wait 60410 00:12:17.767 20:03:59 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60410 00:12:17.767 20:03:59 -- common/autotest_common.sh@638 -- # local es=0 00:12:17.767 20:03:59 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60410 00:12:17.767 20:03:59 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:17.767 20:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.767 20:03:59 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:17.767 20:03:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:17.767 20:03:59 -- common/autotest_common.sh@641 -- # waitforlisten 60410 00:12:17.767 20:03:59 -- common/autotest_common.sh@817 -- # '[' -z 60410 ']' 00:12:17.767 20:03:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.767 20:03:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.767 20:03:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:17.767 20:03:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.767 20:03:59 -- common/autotest_common.sh@10 -- # set +x 00:12:17.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60410) - No such process 00:12:17.767 ERROR: process (pid: 60410) is no longer running 00:12:17.767 20:03:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:17.767 20:03:59 -- common/autotest_common.sh@850 -- # return 1 00:12:17.767 20:03:59 -- common/autotest_common.sh@641 -- # es=1 00:12:17.767 20:03:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:17.767 20:03:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:17.767 20:03:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:17.767 20:03:59 -- event/cpu_locks.sh@54 -- # no_locks 00:12:17.767 20:03:59 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:17.767 20:03:59 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:17.767 20:03:59 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:17.767 00:12:17.768 real 0m1.876s 00:12:17.768 user 0m1.838s 00:12:17.768 sys 0m0.568s 00:12:17.768 20:03:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:17.768 20:03:59 -- common/autotest_common.sh@10 -- # set +x 00:12:17.768 ************************************ 00:12:17.768 END TEST default_locks 00:12:17.768 ************************************ 00:12:17.768 20:03:59 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:17.768 20:03:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:17.768 20:03:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:17.768 20:03:59 -- common/autotest_common.sh@10 -- # set +x 00:12:17.768 ************************************ 00:12:17.768 START TEST default_locks_via_rpc 00:12:17.768 ************************************ 00:12:17.768 20:03:59 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:12:17.768 20:03:59 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60467 00:12:17.768 20:03:59 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:17.768 20:03:59 -- event/cpu_locks.sh@63 -- # waitforlisten 60467 00:12:17.768 20:03:59 -- common/autotest_common.sh@817 -- # '[' -z 60467 ']' 00:12:17.768 20:03:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.768 20:03:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.768 20:03:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.768 20:03:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:17.768 20:03:59 -- common/autotest_common.sh@10 -- # set +x 00:12:18.030 [2024-04-24 20:04:00.032726] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
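[annotation] The default_locks test above reduces to three checks around a single-core target: while the pid is alive, lslocks -p must show an spdk_cpu_lock entry; after killprocess, the old instance is gone ("No such process"); and no /var/tmp/spdk_cpu_lock_* files may remain. A rough equivalent, with the pid and the lock-file glob taken from the trace rather than from any documented interface:

  pid=60410                                              # spdk_tgt started with -m 0x1, as in the trace
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"
  kill "$pid"
  while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
  shopt -s nullglob
  stale=(/var/tmp/spdk_cpu_lock_*)
  (( ${#stale[@]} == 0 )) && echo "no core locks left behind"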
00:12:18.030 [2024-04-24 20:04:00.032836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60467 ] 00:12:18.030 [2024-04-24 20:04:00.174607] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.289 [2024-04-24 20:04:00.329618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.858 20:04:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:18.858 20:04:00 -- common/autotest_common.sh@850 -- # return 0 00:12:18.858 20:04:00 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:18.858 20:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.858 20:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.858 20:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.858 20:04:00 -- event/cpu_locks.sh@67 -- # no_locks 00:12:18.858 20:04:00 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:18.858 20:04:00 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:18.858 20:04:00 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:18.858 20:04:00 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:18.858 20:04:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:18.858 20:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.858 20:04:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:18.858 20:04:00 -- event/cpu_locks.sh@71 -- # locks_exist 60467 00:12:18.858 20:04:00 -- event/cpu_locks.sh@22 -- # lslocks -p 60467 00:12:18.858 20:04:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:19.117 20:04:01 -- event/cpu_locks.sh@73 -- # killprocess 60467 00:12:19.117 20:04:01 -- common/autotest_common.sh@936 -- # '[' -z 60467 ']' 00:12:19.117 20:04:01 -- common/autotest_common.sh@940 -- # kill -0 60467 00:12:19.118 20:04:01 -- common/autotest_common.sh@941 -- # uname 00:12:19.118 20:04:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:19.118 20:04:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60467 00:12:19.118 20:04:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:19.118 20:04:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:19.118 killing process with pid 60467 00:12:19.118 20:04:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60467' 00:12:19.118 20:04:01 -- common/autotest_common.sh@955 -- # kill 60467 00:12:19.118 20:04:01 -- common/autotest_common.sh@960 -- # wait 60467 00:12:19.687 00:12:19.687 real 0m1.737s 00:12:19.687 user 0m1.657s 00:12:19.687 sys 0m0.659s 00:12:19.687 20:04:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.687 20:04:01 -- common/autotest_common.sh@10 -- # set +x 00:12:19.687 ************************************ 00:12:19.687 END TEST default_locks_via_rpc 00:12:19.687 ************************************ 00:12:19.687 20:04:01 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:19.687 20:04:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:19.687 20:04:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.687 20:04:01 -- common/autotest_common.sh@10 -- # set +x 00:12:19.687 ************************************ 00:12:19.687 START TEST non_locking_app_on_locked_coremask 00:12:19.687 ************************************ 00:12:19.687 20:04:01 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:12:19.687 20:04:01 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60511 00:12:19.687 20:04:01 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:19.687 20:04:01 -- event/cpu_locks.sh@81 -- # waitforlisten 60511 /var/tmp/spdk.sock 00:12:19.687 20:04:01 -- common/autotest_common.sh@817 -- # '[' -z 60511 ']' 00:12:19.687 20:04:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.687 20:04:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.687 20:04:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.687 20:04:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.687 20:04:01 -- common/autotest_common.sh@10 -- # set +x 00:12:19.687 [2024-04-24 20:04:01.908859] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:19.687 [2024-04-24 20:04:01.908918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60511 ] 00:12:19.946 [2024-04-24 20:04:02.046371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.946 [2024-04-24 20:04:02.144123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.514 20:04:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:20.514 20:04:02 -- common/autotest_common.sh@850 -- # return 0 00:12:20.514 20:04:02 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60527 00:12:20.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:20.514 20:04:02 -- event/cpu_locks.sh@85 -- # waitforlisten 60527 /var/tmp/spdk2.sock 00:12:20.514 20:04:02 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:20.514 20:04:02 -- common/autotest_common.sh@817 -- # '[' -z 60527 ']' 00:12:20.514 20:04:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:20.514 20:04:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:20.514 20:04:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:20.514 20:04:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:20.514 20:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:20.780 [2024-04-24 20:04:02.812760] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:20.780 [2024-04-24 20:04:02.813282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:12:20.780 [2024-04-24 20:04:02.945688] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
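[annotation] Two opt-out mechanisms appear in the traces above: default_locks_via_rpc toggles the core locks at runtime with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs, and non_locking_app_on_locked_coremask starts its second target with --disable-cpumask-locks so it never tries to claim core 0. Both invocations, reduced to their essentials; socket paths are the ones used in the trace, and the second command assumes another target already owns core 0.

  # runtime toggle against a running target on the default socket (/var/tmp/spdk.sock)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

  # second instance on the same core, opting out of the core lock entirely
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock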
00:12:20.780 [2024-04-24 20:04:02.945727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.042 [2024-04-24 20:04:03.147753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.611 20:04:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:21.611 20:04:03 -- common/autotest_common.sh@850 -- # return 0 00:12:21.611 20:04:03 -- event/cpu_locks.sh@87 -- # locks_exist 60511 00:12:21.611 20:04:03 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:21.611 20:04:03 -- event/cpu_locks.sh@22 -- # lslocks -p 60511 00:12:21.870 20:04:04 -- event/cpu_locks.sh@89 -- # killprocess 60511 00:12:21.870 20:04:04 -- common/autotest_common.sh@936 -- # '[' -z 60511 ']' 00:12:21.870 20:04:04 -- common/autotest_common.sh@940 -- # kill -0 60511 00:12:21.870 20:04:04 -- common/autotest_common.sh@941 -- # uname 00:12:21.870 20:04:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:21.870 20:04:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60511 00:12:21.870 killing process with pid 60511 00:12:21.870 20:04:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:21.870 20:04:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:21.870 20:04:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60511' 00:12:21.870 20:04:04 -- common/autotest_common.sh@955 -- # kill 60511 00:12:21.870 20:04:04 -- common/autotest_common.sh@960 -- # wait 60511 00:12:22.807 20:04:04 -- event/cpu_locks.sh@90 -- # killprocess 60527 00:12:22.807 20:04:04 -- common/autotest_common.sh@936 -- # '[' -z 60527 ']' 00:12:22.807 20:04:04 -- common/autotest_common.sh@940 -- # kill -0 60527 00:12:22.807 20:04:04 -- common/autotest_common.sh@941 -- # uname 00:12:22.807 20:04:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:22.807 20:04:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60527 00:12:22.807 killing process with pid 60527 00:12:22.807 20:04:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:22.807 20:04:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:22.807 20:04:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60527' 00:12:22.807 20:04:04 -- common/autotest_common.sh@955 -- # kill 60527 00:12:22.807 20:04:04 -- common/autotest_common.sh@960 -- # wait 60527 00:12:23.208 ************************************ 00:12:23.208 END TEST non_locking_app_on_locked_coremask 00:12:23.208 ************************************ 00:12:23.208 00:12:23.208 real 0m3.366s 00:12:23.208 user 0m3.629s 00:12:23.208 sys 0m0.843s 00:12:23.208 20:04:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:23.208 20:04:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.208 20:04:05 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:23.208 20:04:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:23.208 20:04:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.208 20:04:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.208 ************************************ 00:12:23.208 START TEST locking_app_on_unlocked_coremask 00:12:23.208 ************************************ 00:12:23.208 20:04:05 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:12:23.208 20:04:05 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60598 00:12:23.208 20:04:05 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:12:23.208 20:04:05 -- event/cpu_locks.sh@99 -- # waitforlisten 60598 /var/tmp/spdk.sock 00:12:23.208 20:04:05 -- common/autotest_common.sh@817 -- # '[' -z 60598 ']' 00:12:23.208 20:04:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.208 20:04:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:23.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.208 20:04:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.208 20:04:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:23.208 20:04:05 -- common/autotest_common.sh@10 -- # set +x 00:12:23.208 [2024-04-24 20:04:05.410187] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:23.208 [2024-04-24 20:04:05.410272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60598 ] 00:12:23.481 [2024-04-24 20:04:05.545953] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:23.481 [2024-04-24 20:04:05.546012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.481 [2024-04-24 20:04:05.639596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.050 20:04:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.050 20:04:06 -- common/autotest_common.sh@850 -- # return 0 00:12:24.050 20:04:06 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60614 00:12:24.050 20:04:06 -- event/cpu_locks.sh@103 -- # waitforlisten 60614 /var/tmp/spdk2.sock 00:12:24.050 20:04:06 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:24.050 20:04:06 -- common/autotest_common.sh@817 -- # '[' -z 60614 ']' 00:12:24.050 20:04:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:24.050 20:04:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.050 20:04:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:24.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:24.050 20:04:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.050 20:04:06 -- common/autotest_common.sh@10 -- # set +x 00:12:24.050 [2024-04-24 20:04:06.299562] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
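[annotation] In locking_app_on_unlocked_coremask the first target (pid 60598) opts out of the core lock, so the second target (pid 60614) can start on the same core and take the lock itself; the two coexist only because each answers on its own RPC socket. A sketch of that two-socket setup; rpc_get_methods is just a stand-in call, the test itself issues bdev and lock RPCs.

  # first target: core 0, no core lock, default socket /var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # second target: same core, its own socket, and it claims the core lock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &

  # each instance is then driven by pointing rpc.py at the matching socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods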
00:12:24.050 [2024-04-24 20:04:06.299718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60614 ] 00:12:24.361 [2024-04-24 20:04:06.431045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.619 [2024-04-24 20:04:06.622235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.187 20:04:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:25.187 20:04:07 -- common/autotest_common.sh@850 -- # return 0 00:12:25.187 20:04:07 -- event/cpu_locks.sh@105 -- # locks_exist 60614 00:12:25.187 20:04:07 -- event/cpu_locks.sh@22 -- # lslocks -p 60614 00:12:25.187 20:04:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:25.754 20:04:07 -- event/cpu_locks.sh@107 -- # killprocess 60598 00:12:25.754 20:04:07 -- common/autotest_common.sh@936 -- # '[' -z 60598 ']' 00:12:25.754 20:04:07 -- common/autotest_common.sh@940 -- # kill -0 60598 00:12:25.754 20:04:07 -- common/autotest_common.sh@941 -- # uname 00:12:25.754 20:04:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.754 20:04:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60598 00:12:25.754 killing process with pid 60598 00:12:25.754 20:04:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:25.754 20:04:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:25.754 20:04:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60598' 00:12:25.755 20:04:07 -- common/autotest_common.sh@955 -- # kill 60598 00:12:25.755 20:04:07 -- common/autotest_common.sh@960 -- # wait 60598 00:12:26.687 20:04:08 -- event/cpu_locks.sh@108 -- # killprocess 60614 00:12:26.687 20:04:08 -- common/autotest_common.sh@936 -- # '[' -z 60614 ']' 00:12:26.687 20:04:08 -- common/autotest_common.sh@940 -- # kill -0 60614 00:12:26.687 20:04:08 -- common/autotest_common.sh@941 -- # uname 00:12:26.687 20:04:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.687 20:04:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60614 00:12:26.687 killing process with pid 60614 00:12:26.687 20:04:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.687 20:04:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.687 20:04:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60614' 00:12:26.687 20:04:08 -- common/autotest_common.sh@955 -- # kill 60614 00:12:26.687 20:04:08 -- common/autotest_common.sh@960 -- # wait 60614 00:12:26.945 00:12:26.945 real 0m3.623s 00:12:26.945 user 0m3.872s 00:12:26.945 sys 0m0.978s 00:12:26.945 20:04:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:26.945 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:12:26.945 ************************************ 00:12:26.945 END TEST locking_app_on_unlocked_coremask 00:12:26.945 ************************************ 00:12:26.945 20:04:09 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:26.945 20:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:26.945 20:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.945 20:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:26.945 ************************************ 00:12:26.945 START TEST locking_app_on_locked_coremask 00:12:26.945 
************************************ 00:12:26.945 20:04:09 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:12:26.945 20:04:09 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60680 00:12:26.945 20:04:09 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:26.945 20:04:09 -- event/cpu_locks.sh@116 -- # waitforlisten 60680 /var/tmp/spdk.sock 00:12:26.945 20:04:09 -- common/autotest_common.sh@817 -- # '[' -z 60680 ']' 00:12:26.945 20:04:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.945 20:04:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:26.945 20:04:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.945 20:04:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:26.945 20:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:26.945 [2024-04-24 20:04:09.177720] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:26.945 [2024-04-24 20:04:09.177785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60680 ] 00:12:27.203 [2024-04-24 20:04:09.314790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.203 [2024-04-24 20:04:09.405879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.770 20:04:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:27.770 20:04:09 -- common/autotest_common.sh@850 -- # return 0 00:12:27.770 20:04:09 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60690 00:12:27.770 20:04:09 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60690 /var/tmp/spdk2.sock 00:12:27.770 20:04:09 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:27.770 20:04:09 -- common/autotest_common.sh@638 -- # local es=0 00:12:27.770 20:04:09 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60690 /var/tmp/spdk2.sock 00:12:27.770 20:04:09 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:27.770 20:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:27.770 20:04:09 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:27.770 20:04:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:27.770 20:04:09 -- common/autotest_common.sh@641 -- # waitforlisten 60690 /var/tmp/spdk2.sock 00:12:27.770 20:04:09 -- common/autotest_common.sh@817 -- # '[' -z 60690 ']' 00:12:27.770 20:04:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:27.770 20:04:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.770 20:04:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:27.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:27.770 20:04:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.770 20:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:28.027 [2024-04-24 20:04:10.039241] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:28.027 [2024-04-24 20:04:10.039416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60690 ] 00:12:28.027 [2024-04-24 20:04:10.169118] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60680 has claimed it. 00:12:28.027 [2024-04-24 20:04:10.169204] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:28.594 ERROR: process (pid: 60690) is no longer running 00:12:28.594 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60690) - No such process 00:12:28.594 20:04:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.594 20:04:10 -- common/autotest_common.sh@850 -- # return 1 00:12:28.594 20:04:10 -- common/autotest_common.sh@641 -- # es=1 00:12:28.594 20:04:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:28.594 20:04:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:28.594 20:04:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:28.594 20:04:10 -- event/cpu_locks.sh@122 -- # locks_exist 60680 00:12:28.594 20:04:10 -- event/cpu_locks.sh@22 -- # lslocks -p 60680 00:12:28.594 20:04:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:28.852 20:04:11 -- event/cpu_locks.sh@124 -- # killprocess 60680 00:12:28.852 20:04:11 -- common/autotest_common.sh@936 -- # '[' -z 60680 ']' 00:12:28.852 20:04:11 -- common/autotest_common.sh@940 -- # kill -0 60680 00:12:29.111 20:04:11 -- common/autotest_common.sh@941 -- # uname 00:12:29.111 20:04:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.111 20:04:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60680 00:12:29.111 20:04:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.111 20:04:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.111 20:04:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60680' 00:12:29.111 killing process with pid 60680 00:12:29.111 20:04:11 -- common/autotest_common.sh@955 -- # kill 60680 00:12:29.111 20:04:11 -- common/autotest_common.sh@960 -- # wait 60680 00:12:29.370 00:12:29.370 real 0m2.366s 00:12:29.370 user 0m2.580s 00:12:29.370 sys 0m0.584s 00:12:29.370 20:04:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:29.370 20:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.370 ************************************ 00:12:29.370 END TEST locking_app_on_locked_coremask 00:12:29.370 ************************************ 00:12:29.370 20:04:11 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:29.370 20:04:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:29.370 20:04:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.370 20:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.648 ************************************ 00:12:29.648 START TEST locking_overlapped_coremask 00:12:29.648 ************************************ 00:12:29.648 20:04:11 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:12:29.648 20:04:11 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60745 00:12:29.648 20:04:11 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:29.648 20:04:11 -- event/cpu_locks.sh@133 -- # waitforlisten 60745 /var/tmp/spdk.sock 00:12:29.648 
20:04:11 -- common/autotest_common.sh@817 -- # '[' -z 60745 ']' 00:12:29.648 20:04:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.648 20:04:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:29.648 20:04:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.648 20:04:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:29.648 20:04:11 -- common/autotest_common.sh@10 -- # set +x 00:12:29.648 [2024-04-24 20:04:11.685912] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:29.648 [2024-04-24 20:04:11.685976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60745 ] 00:12:29.648 [2024-04-24 20:04:11.824657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:29.920 [2024-04-24 20:04:11.927529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.920 [2024-04-24 20:04:11.927719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.920 [2024-04-24 20:04:11.927722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.490 20:04:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:30.490 20:04:12 -- common/autotest_common.sh@850 -- # return 0 00:12:30.490 20:04:12 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60762 00:12:30.490 20:04:12 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:30.490 20:04:12 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60762 /var/tmp/spdk2.sock 00:12:30.490 20:04:12 -- common/autotest_common.sh@638 -- # local es=0 00:12:30.490 20:04:12 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 60762 /var/tmp/spdk2.sock 00:12:30.490 20:04:12 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:30.490 20:04:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.490 20:04:12 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:30.490 20:04:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:30.490 20:04:12 -- common/autotest_common.sh@641 -- # waitforlisten 60762 /var/tmp/spdk2.sock 00:12:30.490 20:04:12 -- common/autotest_common.sh@817 -- # '[' -z 60762 ']' 00:12:30.490 20:04:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:30.490 20:04:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:30.490 20:04:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:30.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:30.490 20:04:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:30.490 20:04:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.490 [2024-04-24 20:04:12.560427] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:30.490 [2024-04-24 20:04:12.560565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60762 ] 00:12:30.490 [2024-04-24 20:04:12.690257] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60745 has claimed it. 00:12:30.490 [2024-04-24 20:04:12.690331] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:31.060 ERROR: process (pid: 60762) is no longer running 00:12:31.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (60762) - No such process 00:12:31.060 20:04:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:31.060 20:04:13 -- common/autotest_common.sh@850 -- # return 1 00:12:31.060 20:04:13 -- common/autotest_common.sh@641 -- # es=1 00:12:31.060 20:04:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:31.060 20:04:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:31.060 20:04:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:31.060 20:04:13 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:31.060 20:04:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:31.060 20:04:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:31.060 20:04:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:31.060 20:04:13 -- event/cpu_locks.sh@141 -- # killprocess 60745 00:12:31.060 20:04:13 -- common/autotest_common.sh@936 -- # '[' -z 60745 ']' 00:12:31.060 20:04:13 -- common/autotest_common.sh@940 -- # kill -0 60745 00:12:31.060 20:04:13 -- common/autotest_common.sh@941 -- # uname 00:12:31.060 20:04:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.060 20:04:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60745 00:12:31.060 20:04:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.060 20:04:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.060 20:04:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60745' 00:12:31.060 killing process with pid 60745 00:12:31.060 20:04:13 -- common/autotest_common.sh@955 -- # kill 60745 00:12:31.060 20:04:13 -- common/autotest_common.sh@960 -- # wait 60745 00:12:31.631 00:12:31.631 real 0m1.948s 00:12:31.631 user 0m5.189s 00:12:31.631 sys 0m0.337s 00:12:31.631 20:04:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.631 20:04:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.631 ************************************ 00:12:31.631 END TEST locking_overlapped_coremask 00:12:31.631 ************************************ 00:12:31.631 20:04:13 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:31.631 20:04:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:31.631 20:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.631 20:04:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.631 ************************************ 00:12:31.631 START TEST locking_overlapped_coremask_via_rpc 00:12:31.631 ************************************ 
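Before the via_rpc variant below, it is worth summarizing what the locking_overlapped_coremask run that just finished relies on: spdk_tgt writes one advisory lock file per claimed core under /var/tmp, so the first target (-m 0x7) produces spdk_cpu_lock_000 through spdk_cpu_lock_002, and the second target (-m 0x1c, cores 2-4) aborts because core 2 is already claimed. A minimal bash sketch of the same scenario, using only the binary path and flags that appear in the trace above (the sleep is a crude stand-in for the test's waitforlisten helper, not part of the captured run):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$SPDK_TGT" -m 0x7 -r /var/tmp/spdk.sock &            # claims cores 0-2, drops lock files
  first=$!
  sleep 3                                                # simplification of waitforlisten

  ls /var/tmp/spdk_cpu_lock_*                            # expect ..._000 ..._001 ..._002

  # Overlapping mask 0x1c (cores 2-4) should fail on core 2 and exit non-zero.
  "$SPDK_TGT" -m 0x1c -r /var/tmp/spdk2.sock || echo "overlap rejected, as in the test"

  kill "$first"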
00:12:31.631 20:04:13 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:12:31.631 20:04:13 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60807 00:12:31.631 20:04:13 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:31.631 20:04:13 -- event/cpu_locks.sh@149 -- # waitforlisten 60807 /var/tmp/spdk.sock 00:12:31.631 20:04:13 -- common/autotest_common.sh@817 -- # '[' -z 60807 ']' 00:12:31.631 20:04:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.631 20:04:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.631 20:04:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.631 20:04:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.631 20:04:13 -- common/autotest_common.sh@10 -- # set +x 00:12:31.631 [2024-04-24 20:04:13.772598] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:31.631 [2024-04-24 20:04:13.772786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60807 ] 00:12:31.891 [2024-04-24 20:04:13.894036] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:31.891 [2024-04-24 20:04:13.894204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.891 [2024-04-24 20:04:13.993419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.891 [2024-04-24 20:04:13.993549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.891 [2024-04-24 20:04:13.993552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.461 20:04:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:32.461 20:04:14 -- common/autotest_common.sh@850 -- # return 0 00:12:32.461 20:04:14 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:32.461 20:04:14 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60824 00:12:32.461 20:04:14 -- event/cpu_locks.sh@153 -- # waitforlisten 60824 /var/tmp/spdk2.sock 00:12:32.461 20:04:14 -- common/autotest_common.sh@817 -- # '[' -z 60824 ']' 00:12:32.461 20:04:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:32.461 20:04:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:32.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:32.461 20:04:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:32.461 20:04:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:32.461 20:04:14 -- common/autotest_common.sh@10 -- # set +x 00:12:32.461 [2024-04-24 20:04:14.652885] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:32.461 [2024-04-24 20:04:14.652941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60824 ] 00:12:32.720 [2024-04-24 20:04:14.782297] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:12:32.720 [2024-04-24 20:04:14.782344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.979 [2024-04-24 20:04:14.979592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.979 [2024-04-24 20:04:14.979786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.979 [2024-04-24 20:04:14.979790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:33.549 20:04:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:33.549 20:04:15 -- common/autotest_common.sh@850 -- # return 0 00:12:33.549 20:04:15 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:33.549 20:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.549 20:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:33.549 20:04:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.549 20:04:15 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:33.549 20:04:15 -- common/autotest_common.sh@638 -- # local es=0 00:12:33.549 20:04:15 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:33.549 20:04:15 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:12:33.549 20:04:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:33.549 20:04:15 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:12:33.549 20:04:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:33.549 20:04:15 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:33.549 20:04:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.549 20:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:33.549 [2024-04-24 20:04:15.561523] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60807 has claimed it. 00:12:33.549 request: 00:12:33.549 { 00:12:33.549 "method": "framework_enable_cpumask_locks", 00:12:33.549 "req_id": 1 00:12:33.549 } 00:12:33.549 Got JSON-RPC error response 00:12:33.549 response: 00:12:33.549 { 00:12:33.549 "code": -32603, 00:12:33.549 "message": "Failed to claim CPU core: 2" 00:12:33.549 } 00:12:33.549 20:04:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:12:33.549 20:04:15 -- common/autotest_common.sh@641 -- # es=1 00:12:33.549 20:04:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:33.549 20:04:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:33.549 20:04:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:33.549 20:04:15 -- event/cpu_locks.sh@158 -- # waitforlisten 60807 /var/tmp/spdk.sock 00:12:33.549 20:04:15 -- common/autotest_common.sh@817 -- # '[' -z 60807 ']' 00:12:33.549 20:04:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.549 20:04:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:33.549 20:04:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
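The JSON-RPC exchange above is the point of the via_rpc variant: both targets start with --disable-cpumask-locks, the first then enables the locks at runtime and claims its cores (mask 0x7), and the same call against the second target's socket fails with error -32603 because core 2 is already locked. Issued by hand it would look roughly like the sketch below, assuming the standard scripts/rpc.py client in the checkout used by this run (this is illustrative, not part of the captured log):

  # enable core locks on the first target (default socket /var/tmp/spdk.sock)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

  # the same RPC against the overlapping target is expected to fail with
  # "Failed to claim CPU core: 2" (JSON-RPC error -32603)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks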
00:12:33.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.549 20:04:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:33.549 20:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:33.809 20:04:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:33.809 20:04:15 -- common/autotest_common.sh@850 -- # return 0 00:12:33.809 20:04:15 -- event/cpu_locks.sh@159 -- # waitforlisten 60824 /var/tmp/spdk2.sock 00:12:33.809 20:04:15 -- common/autotest_common.sh@817 -- # '[' -z 60824 ']' 00:12:33.809 20:04:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:33.809 20:04:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:33.809 20:04:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:33.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:33.809 20:04:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:33.809 20:04:15 -- common/autotest_common.sh@10 -- # set +x 00:12:34.068 20:04:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:34.068 20:04:16 -- common/autotest_common.sh@850 -- # return 0 00:12:34.068 20:04:16 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:34.068 20:04:16 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:34.068 20:04:16 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:34.068 20:04:16 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:34.068 00:12:34.068 real 0m2.360s 00:12:34.068 user 0m1.097s 00:12:34.068 sys 0m0.178s 00:12:34.068 20:04:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:34.068 20:04:16 -- common/autotest_common.sh@10 -- # set +x 00:12:34.068 ************************************ 00:12:34.068 END TEST locking_overlapped_coremask_via_rpc 00:12:34.068 ************************************ 00:12:34.068 20:04:16 -- event/cpu_locks.sh@174 -- # cleanup 00:12:34.068 20:04:16 -- event/cpu_locks.sh@15 -- # [[ -z 60807 ]] 00:12:34.068 20:04:16 -- event/cpu_locks.sh@15 -- # killprocess 60807 00:12:34.068 20:04:16 -- common/autotest_common.sh@936 -- # '[' -z 60807 ']' 00:12:34.068 20:04:16 -- common/autotest_common.sh@940 -- # kill -0 60807 00:12:34.068 20:04:16 -- common/autotest_common.sh@941 -- # uname 00:12:34.068 20:04:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.068 20:04:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60807 00:12:34.068 killing process with pid 60807 00:12:34.068 20:04:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.068 20:04:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.068 20:04:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60807' 00:12:34.068 20:04:16 -- common/autotest_common.sh@955 -- # kill 60807 00:12:34.068 20:04:16 -- common/autotest_common.sh@960 -- # wait 60807 00:12:34.327 20:04:16 -- event/cpu_locks.sh@16 -- # [[ -z 60824 ]] 00:12:34.327 20:04:16 -- event/cpu_locks.sh@16 -- # killprocess 60824 00:12:34.327 20:04:16 -- common/autotest_common.sh@936 -- # '[' -z 60824 ']' 00:12:34.327 20:04:16 -- common/autotest_common.sh@940 -- # kill -0 
60824 00:12:34.327 20:04:16 -- common/autotest_common.sh@941 -- # uname 00:12:34.327 20:04:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.327 20:04:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60824 00:12:34.327 killing process with pid 60824 00:12:34.327 20:04:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:34.327 20:04:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:34.327 20:04:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60824' 00:12:34.327 20:04:16 -- common/autotest_common.sh@955 -- # kill 60824 00:12:34.327 20:04:16 -- common/autotest_common.sh@960 -- # wait 60824 00:12:34.897 20:04:16 -- event/cpu_locks.sh@18 -- # rm -f 00:12:34.897 Process with pid 60807 is not found 00:12:34.897 Process with pid 60824 is not found 00:12:34.897 20:04:16 -- event/cpu_locks.sh@1 -- # cleanup 00:12:34.897 20:04:16 -- event/cpu_locks.sh@15 -- # [[ -z 60807 ]] 00:12:34.897 20:04:16 -- event/cpu_locks.sh@15 -- # killprocess 60807 00:12:34.897 20:04:16 -- common/autotest_common.sh@936 -- # '[' -z 60807 ']' 00:12:34.897 20:04:16 -- common/autotest_common.sh@940 -- # kill -0 60807 00:12:34.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60807) - No such process 00:12:34.897 20:04:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60807 is not found' 00:12:34.897 20:04:16 -- event/cpu_locks.sh@16 -- # [[ -z 60824 ]] 00:12:34.897 20:04:16 -- event/cpu_locks.sh@16 -- # killprocess 60824 00:12:34.897 20:04:16 -- common/autotest_common.sh@936 -- # '[' -z 60824 ']' 00:12:34.897 20:04:16 -- common/autotest_common.sh@940 -- # kill -0 60824 00:12:34.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (60824) - No such process 00:12:34.897 20:04:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 60824 is not found' 00:12:34.897 20:04:16 -- event/cpu_locks.sh@18 -- # rm -f 00:12:34.897 00:12:34.897 real 0m19.187s 00:12:34.897 user 0m31.545s 00:12:34.897 sys 0m5.241s 00:12:34.897 20:04:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:34.897 20:04:16 -- common/autotest_common.sh@10 -- # set +x 00:12:34.897 ************************************ 00:12:34.897 END TEST cpu_locks 00:12:34.897 ************************************ 00:12:34.897 00:12:34.897 real 0m48.000s 00:12:34.897 user 1m30.727s 00:12:34.897 sys 0m8.896s 00:12:34.897 20:04:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:34.897 20:04:16 -- common/autotest_common.sh@10 -- # set +x 00:12:34.897 ************************************ 00:12:34.897 END TEST event 00:12:34.897 ************************************ 00:12:34.897 20:04:17 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:34.897 20:04:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:34.897 20:04:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.897 20:04:17 -- common/autotest_common.sh@10 -- # set +x 00:12:34.897 ************************************ 00:12:34.897 START TEST thread 00:12:34.897 ************************************ 00:12:34.897 20:04:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:35.157 * Looking for test storage... 
00:12:35.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:35.157 20:04:17 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:35.157 20:04:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:35.157 20:04:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:35.157 20:04:17 -- common/autotest_common.sh@10 -- # set +x 00:12:35.157 ************************************ 00:12:35.157 START TEST thread_poller_perf 00:12:35.157 ************************************ 00:12:35.157 20:04:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:35.157 [2024-04-24 20:04:17.379335] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:35.157 [2024-04-24 20:04:17.379595] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60959 ] 00:12:35.418 [2024-04-24 20:04:17.522602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.418 [2024-04-24 20:04:17.619756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.418 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:36.808 ====================================== 00:12:36.808 busy:2296623838 (cyc) 00:12:36.808 total_run_count: 391000 00:12:36.808 tsc_hz: 2290000000 (cyc) 00:12:36.808 ====================================== 00:12:36.808 poller_cost: 5873 (cyc), 2564 (nsec) 00:12:36.808 00:12:36.808 ************************************ 00:12:36.808 END TEST thread_poller_perf 00:12:36.808 ************************************ 00:12:36.808 real 0m1.375s 00:12:36.808 user 0m1.222s 00:12:36.808 sys 0m0.046s 00:12:36.808 20:04:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:36.808 20:04:18 -- common/autotest_common.sh@10 -- # set +x 00:12:36.808 20:04:18 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:36.808 20:04:18 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:12:36.808 20:04:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.808 20:04:18 -- common/autotest_common.sh@10 -- # set +x 00:12:36.808 ************************************ 00:12:36.808 START TEST thread_poller_perf 00:12:36.808 ************************************ 00:12:36.808 20:04:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:36.808 [2024-04-24 20:04:18.897265] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:36.808 [2024-04-24 20:04:18.897350] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60997 ] 00:12:36.808 [2024-04-24 20:04:19.039012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.067 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:12:37.067 [2024-04-24 20:04:19.131396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.004 ====================================== 00:12:38.004 busy:2291867832 (cyc) 00:12:38.004 total_run_count: 5043000 00:12:38.004 tsc_hz: 2290000000 (cyc) 00:12:38.004 ====================================== 00:12:38.004 poller_cost: 454 (cyc), 198 (nsec) 00:12:38.004 00:12:38.004 real 0m1.361s 00:12:38.004 user 0m1.204s 00:12:38.004 sys 0m0.050s 00:12:38.004 20:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:38.004 ************************************ 00:12:38.004 END TEST thread_poller_perf 00:12:38.004 ************************************ 00:12:38.004 20:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:38.264 20:04:20 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:38.264 ************************************ 00:12:38.264 END TEST thread 00:12:38.264 ************************************ 00:12:38.264 00:12:38.264 real 0m3.154s 00:12:38.264 user 0m2.587s 00:12:38.264 sys 0m0.332s 00:12:38.264 20:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:38.264 20:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:38.264 20:04:20 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:38.264 20:04:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:38.264 20:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.264 20:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:38.264 ************************************ 00:12:38.264 START TEST accel 00:12:38.264 ************************************ 00:12:38.264 20:04:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:38.524 * Looking for test storage... 00:12:38.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:38.524 20:04:20 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:38.524 20:04:20 -- accel/accel.sh@82 -- # get_expected_opcs 00:12:38.524 20:04:20 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:38.524 20:04:20 -- accel/accel.sh@62 -- # spdk_tgt_pid=61078 00:12:38.524 20:04:20 -- accel/accel.sh@63 -- # waitforlisten 61078 00:12:38.524 20:04:20 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:38.524 20:04:20 -- accel/accel.sh@61 -- # build_accel_config 00:12:38.524 20:04:20 -- common/autotest_common.sh@817 -- # '[' -z 61078 ']' 00:12:38.524 20:04:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.524 20:04:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:38.524 20:04:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.524 20:04:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.524 20:04:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:38.524 20:04:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:38.524 20:04:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.524 20:04:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:38.524 20:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:38.524 20:04:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:38.524 20:04:20 -- accel/accel.sh@40 -- # local IFS=, 00:12:38.524 20:04:20 -- accel/accel.sh@41 -- # jq -r . 
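A note on the two thread_poller_perf results above: the first run passes -b 1000 -l 1 -t 1 (1000 pollers, 1 microsecond period, 1 second) and the second uses -l 0, matching the "Running 1000 pollers..." banners. The poller_cost figures are consistent with busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (integer division); a quick arithmetic check of the numbers printed above:

  # first run:  2296623838 cyc / 391000 runs  -> 5873 cyc -> 2564 nsec at 2290000000 Hz
  # second run: 2291867832 cyc / 5043000 runs ->  454 cyc ->  198 nsec
  echo $(( 2296623838 / 391000 ))               # 5873
  echo $(( 5873 * 1000000000 / 2290000000 ))    # 2564
  echo $(( 2291867832 / 5043000 ))              # 454
  echo $(( 454 * 1000000000 / 2290000000 ))     # 198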
00:12:38.524 [2024-04-24 20:04:20.601745] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:38.524 [2024-04-24 20:04:20.601914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61078 ] 00:12:38.524 [2024-04-24 20:04:20.739761] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.783 [2024-04-24 20:04:20.844106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.352 20:04:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:39.352 20:04:21 -- common/autotest_common.sh@850 -- # return 0 00:12:39.352 20:04:21 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:39.352 20:04:21 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:39.352 20:04:21 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:39.352 20:04:21 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:39.352 20:04:21 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:39.352 20:04:21 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:39.352 20:04:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.352 20:04:21 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:12:39.352 20:04:21 -- common/autotest_common.sh@10 -- # set +x 00:12:39.352 20:04:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # IFS== 00:12:39.352 20:04:21 -- accel/accel.sh@72 -- # read -r opc module 00:12:39.352 20:04:21 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:39.352 20:04:21 -- accel/accel.sh@75 -- # killprocess 61078 00:12:39.352 20:04:21 -- common/autotest_common.sh@936 -- # '[' -z 61078 ']' 00:12:39.352 20:04:21 -- common/autotest_common.sh@940 -- # kill -0 61078 00:12:39.352 20:04:21 -- common/autotest_common.sh@941 -- # uname 00:12:39.352 20:04:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:39.352 20:04:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61078 00:12:39.352 20:04:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:39.352 20:04:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:39.352 killing process with pid 61078 00:12:39.352 20:04:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61078' 00:12:39.352 20:04:21 -- common/autotest_common.sh@955 -- # kill 61078 00:12:39.352 20:04:21 -- common/autotest_common.sh@960 -- # wait 61078 00:12:39.921 20:04:21 -- accel/accel.sh@76 -- # trap - ERR 00:12:39.921 20:04:21 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:39.921 20:04:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:39.921 20:04:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.921 20:04:21 -- common/autotest_common.sh@10 -- # set +x 00:12:39.921 20:04:22 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:12:39.922 20:04:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:39.922 20:04:22 -- accel/accel.sh@12 -- # build_accel_config 00:12:39.922 20:04:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:39.922 20:04:22 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:39.922 20:04:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:39.922 20:04:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:39.922 20:04:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:39.922 20:04:22 -- accel/accel.sh@40 -- # local IFS=, 00:12:39.922 20:04:22 -- accel/accel.sh@41 -- # jq -r . 00:12:39.922 20:04:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:39.922 20:04:22 -- common/autotest_common.sh@10 -- # set +x 00:12:39.922 20:04:22 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:39.922 20:04:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:39.922 20:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.922 20:04:22 -- common/autotest_common.sh@10 -- # set +x 00:12:40.181 ************************************ 00:12:40.181 START TEST accel_missing_filename 00:12:40.181 ************************************ 00:12:40.181 20:04:22 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:12:40.181 20:04:22 -- common/autotest_common.sh@638 -- # local es=0 00:12:40.181 20:04:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:40.181 20:04:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:40.181 20:04:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.181 20:04:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:40.181 20:04:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.181 20:04:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:12:40.181 20:04:22 -- accel/accel.sh@12 -- # build_accel_config 00:12:40.181 20:04:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:40.181 20:04:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.181 20:04:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.181 20:04:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.181 20:04:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.181 20:04:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.181 20:04:22 -- accel/accel.sh@40 -- # local IFS=, 00:12:40.181 20:04:22 -- accel/accel.sh@41 -- # jq -r . 00:12:40.181 [2024-04-24 20:04:22.284353] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:40.181 [2024-04-24 20:04:22.284470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61138 ] 00:12:40.181 [2024-04-24 20:04:22.422991] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.441 [2024-04-24 20:04:22.525236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.441 [2024-04-24 20:04:22.568906] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:40.441 [2024-04-24 20:04:22.629665] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:12:40.703 A filename is required. 
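The "A filename is required." failure above is the expected outcome: for compress/decompress workloads accel_perf takes the uncompressed input via -l (per the usage text printed further down), and this negative test deliberately omits it. A well-formed invocation of the same binary, borrowing the input file that the compress_verify test below uses, would look roughly like this (illustrative only, not part of the captured run):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib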
00:12:40.703 20:04:22 -- common/autotest_common.sh@641 -- # es=234 00:12:40.703 20:04:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:40.703 20:04:22 -- common/autotest_common.sh@650 -- # es=106 00:12:40.703 20:04:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:40.703 20:04:22 -- common/autotest_common.sh@658 -- # es=1 00:12:40.703 ************************************ 00:12:40.703 END TEST accel_missing_filename 00:12:40.703 ************************************ 00:12:40.703 20:04:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:40.703 00:12:40.703 real 0m0.486s 00:12:40.703 user 0m0.329s 00:12:40.703 sys 0m0.101s 00:12:40.703 20:04:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:40.703 20:04:22 -- common/autotest_common.sh@10 -- # set +x 00:12:40.703 20:04:22 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:40.703 20:04:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:40.703 20:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:40.703 20:04:22 -- common/autotest_common.sh@10 -- # set +x 00:12:40.703 ************************************ 00:12:40.703 START TEST accel_compress_verify 00:12:40.703 ************************************ 00:12:40.703 20:04:22 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:40.703 20:04:22 -- common/autotest_common.sh@638 -- # local es=0 00:12:40.703 20:04:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:40.703 20:04:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:40.703 20:04:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.703 20:04:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:40.703 20:04:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:40.703 20:04:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:40.703 20:04:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:40.703 20:04:22 -- accel/accel.sh@12 -- # build_accel_config 00:12:40.703 20:04:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.703 20:04:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.703 20:04:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.703 20:04:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.703 20:04:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.703 20:04:22 -- accel/accel.sh@40 -- # local IFS=, 00:12:40.703 20:04:22 -- accel/accel.sh@41 -- # jq -r . 00:12:40.703 [2024-04-24 20:04:22.929336] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:40.703 [2024-04-24 20:04:22.929502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61167 ] 00:12:40.965 [2024-04-24 20:04:23.069883] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.965 [2024-04-24 20:04:23.173020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.965 [2024-04-24 20:04:23.216651] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:41.224 [2024-04-24 20:04:23.277099] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:12:41.224 00:12:41.224 Compression does not support the verify option, aborting. 00:12:41.224 20:04:23 -- common/autotest_common.sh@641 -- # es=161 00:12:41.224 20:04:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:41.224 20:04:23 -- common/autotest_common.sh@650 -- # es=33 00:12:41.224 20:04:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:12:41.224 20:04:23 -- common/autotest_common.sh@658 -- # es=1 00:12:41.224 20:04:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:41.224 00:12:41.224 real 0m0.493s 00:12:41.224 user 0m0.333s 00:12:41.224 sys 0m0.102s 00:12:41.224 20:04:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.224 ************************************ 00:12:41.224 END TEST accel_compress_verify 00:12:41.224 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.224 ************************************ 00:12:41.224 20:04:23 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:41.224 20:04:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:41.224 20:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.224 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.486 ************************************ 00:12:41.486 START TEST accel_wrong_workload 00:12:41.486 ************************************ 00:12:41.486 20:04:23 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:12:41.486 20:04:23 -- common/autotest_common.sh@638 -- # local es=0 00:12:41.486 20:04:23 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:41.486 20:04:23 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:41.486 20:04:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:41.486 20:04:23 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:41.486 20:04:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:41.486 20:04:23 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:12:41.486 20:04:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:41.486 20:04:23 -- accel/accel.sh@12 -- # build_accel_config 00:12:41.486 20:04:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.486 20:04:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.486 20:04:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.486 20:04:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.486 20:04:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.486 20:04:23 -- accel/accel.sh@40 -- # local IFS=, 00:12:41.486 20:04:23 -- accel/accel.sh@41 -- # jq -r . 
00:12:41.486 Unsupported workload type: foobar 00:12:41.486 [2024-04-24 20:04:23.574269] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:41.486 accel_perf options: 00:12:41.486 [-h help message] 00:12:41.486 [-q queue depth per core] 00:12:41.486 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:41.486 [-T number of threads per core 00:12:41.486 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:41.486 [-t time in seconds] 00:12:41.486 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:41.486 [ dif_verify, , dif_generate, dif_generate_copy 00:12:41.486 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:41.487 [-l for compress/decompress workloads, name of uncompressed input file 00:12:41.487 [-S for crc32c workload, use this seed value (default 0) 00:12:41.487 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:41.487 [-f for fill workload, use this BYTE value (default 255) 00:12:41.487 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:41.487 [-y verify result if this switch is on] 00:12:41.487 [-a tasks to allocate per core (default: same value as -q)] 00:12:41.487 Can be used to spread operations across a wider range of memory. 00:12:41.487 20:04:23 -- common/autotest_common.sh@641 -- # es=1 00:12:41.487 20:04:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:41.487 ************************************ 00:12:41.487 END TEST accel_wrong_workload 00:12:41.487 ************************************ 00:12:41.487 20:04:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:41.487 20:04:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:41.487 00:12:41.487 real 0m0.043s 00:12:41.487 user 0m0.026s 00:12:41.487 sys 0m0.016s 00:12:41.487 20:04:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.487 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.487 20:04:23 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:41.487 20:04:23 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:12:41.487 20:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.487 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.487 ************************************ 00:12:41.487 START TEST accel_negative_buffers 00:12:41.487 ************************************ 00:12:41.487 20:04:23 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:41.487 20:04:23 -- common/autotest_common.sh@638 -- # local es=0 00:12:41.487 20:04:23 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:41.487 20:04:23 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:12:41.487 20:04:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:41.487 20:04:23 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:12:41.749 20:04:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:41.749 20:04:23 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:12:41.749 20:04:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:41.749 20:04:23 -- accel/accel.sh@12 -- # 
build_accel_config 00:12:41.749 20:04:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.749 20:04:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.749 20:04:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.749 20:04:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.749 20:04:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.749 20:04:23 -- accel/accel.sh@40 -- # local IFS=, 00:12:41.749 20:04:23 -- accel/accel.sh@41 -- # jq -r . 00:12:41.749 -x option must be non-negative. 00:12:41.749 [2024-04-24 20:04:23.767067] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:41.749 accel_perf options: 00:12:41.749 [-h help message] 00:12:41.749 [-q queue depth per core] 00:12:41.749 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:41.749 [-T number of threads per core 00:12:41.749 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:41.749 [-t time in seconds] 00:12:41.749 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:41.749 [ dif_verify, , dif_generate, dif_generate_copy 00:12:41.749 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:41.749 [-l for compress/decompress workloads, name of uncompressed input file 00:12:41.749 [-S for crc32c workload, use this seed value (default 0) 00:12:41.749 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:41.749 [-f for fill workload, use this BYTE value (default 255) 00:12:41.749 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:41.749 [-y verify result if this switch is on] 00:12:41.749 [-a tasks to allocate per core (default: same value as -q)] 00:12:41.749 Can be used to spread operations across a wider range of memory. 
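Both negative tests above fail during argument parsing (-w foobar, then -x -1) and dump the usage text. For contrast, a parameter set that does satisfy it is the one the crc32c test that follows actually runs, shown here as a standalone sketch with only flags from the usage listing above:

  # crc32c for 1 second, seed value 32 (-S), verifying the result (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y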
00:12:41.749 20:04:23 -- common/autotest_common.sh@641 -- # es=1 00:12:41.749 20:04:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:41.749 20:04:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:41.749 20:04:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:41.749 00:12:41.749 real 0m0.039s 00:12:41.749 user 0m0.019s 00:12:41.749 sys 0m0.019s 00:12:41.749 20:04:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.749 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.749 ************************************ 00:12:41.749 END TEST accel_negative_buffers 00:12:41.749 ************************************ 00:12:41.750 20:04:23 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:41.750 20:04:23 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:41.750 20:04:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.750 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.750 ************************************ 00:12:41.750 START TEST accel_crc32c 00:12:41.750 ************************************ 00:12:41.750 20:04:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:41.750 20:04:23 -- accel/accel.sh@16 -- # local accel_opc 00:12:41.750 20:04:23 -- accel/accel.sh@17 -- # local accel_module 00:12:41.750 20:04:23 -- accel/accel.sh@19 -- # IFS=: 00:12:41.750 20:04:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:41.750 20:04:23 -- accel/accel.sh@19 -- # read -r var val 00:12:41.750 20:04:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:41.750 20:04:23 -- accel/accel.sh@12 -- # build_accel_config 00:12:41.750 20:04:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:41.750 20:04:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:41.750 20:04:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:41.750 20:04:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:41.750 20:04:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:41.750 20:04:23 -- accel/accel.sh@40 -- # local IFS=, 00:12:41.750 20:04:23 -- accel/accel.sh@41 -- # jq -r . 00:12:41.750 [2024-04-24 20:04:23.935677] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:41.750 [2024-04-24 20:04:23.935887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:12:42.009 [2024-04-24 20:04:24.073198] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.009 [2024-04-24 20:04:24.178497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=0x1 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=crc32c 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=32 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=software 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@22 -- # accel_module=software 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=32 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=32 00:12:42.009 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.009 20:04:24 -- accel/accel.sh@20 -- # val=1 00:12:42.009 20:04:24 
-- accel/accel.sh@21 -- # case "$var" in 00:12:42.009 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.010 20:04:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:42.010 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.010 20:04:24 -- accel/accel.sh@20 -- # val=Yes 00:12:42.010 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.010 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.010 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:42.010 20:04:24 -- accel/accel.sh@20 -- # val= 00:12:42.010 20:04:24 -- accel/accel.sh@21 -- # case "$var" in 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # IFS=: 00:12:42.010 20:04:24 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.387 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.387 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.387 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.387 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.387 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.387 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:43.387 20:04:25 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:43.387 20:04:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:43.387 00:12:43.387 real 0m1.490s 00:12:43.387 user 0m1.296s 00:12:43.387 sys 0m0.095s 00:12:43.387 20:04:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:43.387 20:04:25 -- common/autotest_common.sh@10 -- # set +x 00:12:43.387 ************************************ 00:12:43.387 END TEST accel_crc32c 00:12:43.387 ************************************ 00:12:43.387 20:04:25 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:43.387 20:04:25 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:43.387 20:04:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:43.387 20:04:25 -- common/autotest_common.sh@10 -- # set +x 00:12:43.387 ************************************ 00:12:43.387 START TEST accel_crc32c_C2 00:12:43.387 
************************************ 00:12:43.387 20:04:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:43.387 20:04:25 -- accel/accel.sh@16 -- # local accel_opc 00:12:43.387 20:04:25 -- accel/accel.sh@17 -- # local accel_module 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.387 20:04:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:43.387 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.387 20:04:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:43.387 20:04:25 -- accel/accel.sh@12 -- # build_accel_config 00:12:43.387 20:04:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:43.387 20:04:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:43.387 20:04:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:43.387 20:04:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:43.387 20:04:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:43.387 20:04:25 -- accel/accel.sh@40 -- # local IFS=, 00:12:43.387 20:04:25 -- accel/accel.sh@41 -- # jq -r . 00:12:43.387 [2024-04-24 20:04:25.560637] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:43.387 [2024-04-24 20:04:25.560757] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:12:43.645 [2024-04-24 20:04:25.702333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.646 [2024-04-24 20:04:25.790949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=0x1 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=crc32c 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=0 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" 
in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=software 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@22 -- # accel_module=software 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=32 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=32 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=1 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val=Yes 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:43.646 20:04:25 -- accel/accel.sh@20 -- # val= 00:12:43.646 20:04:25 -- accel/accel.sh@21 -- # case "$var" in 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # IFS=: 00:12:43.646 20:04:25 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@20 -- # val= 00:12:45.020 20:04:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # IFS=: 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@20 -- # val= 00:12:45.020 20:04:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # IFS=: 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@20 -- # val= 00:12:45.020 20:04:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # IFS=: 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@20 -- # val= 00:12:45.020 20:04:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # IFS=: 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@20 -- # val= 00:12:45.020 20:04:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # IFS=: 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@20 -- # val= 
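
The crc32c workloads in this stretch of the log (accel_crc32c and accel_crc32c_C2) push 4096-byte buffers through the software module's CRC-32C path for one second each and verify the result. As an illustration of the underlying operation only — this is not SPDK's implementation, and the crc32c_sw name, the zero seed and the "123456789" test string are assumptions for the sketch — a bit-at-a-time CRC-32C looks like:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Minimal software CRC-32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c_sw(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        const char *msg = "123456789";
        /* the well-known CRC-32C check value for "123456789" is 0xE3069283 */
        printf("crc32c = 0x%08" PRIX32 "\n", crc32c_sw(0, msg, strlen(msg)));
        return 0;
    }

The same polynomial is what hardware CRC-32C offloads compute, which is why the software module used here and an accelerated module are interchangeable for this test.
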
00:12:45.020 20:04:26 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # IFS=: 00:12:45.020 20:04:26 -- accel/accel.sh@19 -- # read -r var val 00:12:45.020 20:04:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:45.020 ************************************ 00:12:45.021 END TEST accel_crc32c_C2 00:12:45.021 ************************************ 00:12:45.021 20:04:26 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:45.021 20:04:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:45.021 00:12:45.021 real 0m1.470s 00:12:45.021 user 0m1.291s 00:12:45.021 sys 0m0.088s 00:12:45.021 20:04:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.021 20:04:26 -- common/autotest_common.sh@10 -- # set +x 00:12:45.021 20:04:27 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:45.021 20:04:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:45.021 20:04:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.021 20:04:27 -- common/autotest_common.sh@10 -- # set +x 00:12:45.021 ************************************ 00:12:45.021 START TEST accel_copy 00:12:45.021 ************************************ 00:12:45.021 20:04:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:12:45.021 20:04:27 -- accel/accel.sh@16 -- # local accel_opc 00:12:45.021 20:04:27 -- accel/accel.sh@17 -- # local accel_module 00:12:45.021 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.021 20:04:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:45.021 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.021 20:04:27 -- accel/accel.sh@12 -- # build_accel_config 00:12:45.021 20:04:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:45.021 20:04:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:45.021 20:04:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:45.021 20:04:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:45.021 20:04:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:45.021 20:04:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:45.021 20:04:27 -- accel/accel.sh@40 -- # local IFS=, 00:12:45.021 20:04:27 -- accel/accel.sh@41 -- # jq -r . 00:12:45.021 [2024-04-24 20:04:27.159715] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
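
accel_perf is launched here with -t 1 -w copy -y: copy 4096-byte buffers on the software module for one second and verify each result. The operation reduces to a plain buffer copy plus a comparison; a minimal sketch of that check, with the 4 KiB size taken from the trace and the fill pattern assumed for illustration:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BUF_SZ 4096   /* matches the '4096 bytes' buffers in the trace */

    int main(void)
    {
        uint8_t *src = malloc(BUF_SZ);
        uint8_t *dst = calloc(1, BUF_SZ);
        if (!src || !dst)
            return 1;

        for (size_t i = 0; i < BUF_SZ; i++)
            src[i] = (uint8_t)i;          /* arbitrary fill pattern (assumption) */

        memcpy(dst, src, BUF_SZ);         /* the 'copy' operation */

        /* -y asks accel_perf to verify; on the software path this is the check */
        printf("copy %s\n", memcmp(src, dst, BUF_SZ) == 0 ? "ok" : "MISMATCH");
        free(src);
        free(dst);
        return 0;
    }
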
00:12:45.021 [2024-04-24 20:04:27.159833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61321 ] 00:12:45.280 [2024-04-24 20:04:27.299731] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.280 [2024-04-24 20:04:27.396047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.280 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.280 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.280 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.280 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.280 20:04:27 -- accel/accel.sh@20 -- # val=0x1 00:12:45.280 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.280 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.280 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.280 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.280 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.280 20:04:27 -- accel/accel.sh@20 -- # val=copy 00:12:45.280 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.280 20:04:27 -- accel/accel.sh@23 -- # accel_opc=copy 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.280 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val=software 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@22 -- # accel_module=software 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val=32 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val=32 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val=1 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:45.281 
20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val=Yes 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:45.281 20:04:27 -- accel/accel.sh@20 -- # val= 00:12:45.281 20:04:27 -- accel/accel.sh@21 -- # case "$var" in 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # IFS=: 00:12:45.281 20:04:27 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.659 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.659 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.659 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.659 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.659 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.659 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:46.659 20:04:28 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:46.659 20:04:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:46.659 00:12:46.659 real 0m1.490s 00:12:46.659 user 0m1.307s 00:12:46.659 sys 0m0.092s 00:12:46.659 20:04:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:46.659 20:04:28 -- common/autotest_common.sh@10 -- # set +x 00:12:46.659 ************************************ 00:12:46.659 END TEST accel_copy 00:12:46.659 ************************************ 00:12:46.659 20:04:28 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:46.659 20:04:28 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:46.659 20:04:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:46.659 20:04:28 -- common/autotest_common.sh@10 -- # set +x 00:12:46.659 ************************************ 00:12:46.659 START TEST accel_fill 00:12:46.659 ************************************ 00:12:46.659 20:04:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:46.659 20:04:28 -- accel/accel.sh@16 -- # local accel_opc 00:12:46.659 20:04:28 -- accel/accel.sh@17 -- # local 
accel_module 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.659 20:04:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:46.659 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.659 20:04:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:46.659 20:04:28 -- accel/accel.sh@12 -- # build_accel_config 00:12:46.659 20:04:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.659 20:04:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.659 20:04:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.659 20:04:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.659 20:04:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.659 20:04:28 -- accel/accel.sh@40 -- # local IFS=, 00:12:46.659 20:04:28 -- accel/accel.sh@41 -- # jq -r . 00:12:46.659 [2024-04-24 20:04:28.701778] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:46.659 [2024-04-24 20:04:28.701844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61366 ] 00:12:46.659 [2024-04-24 20:04:28.825857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.917 [2024-04-24 20:04:28.946937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=0x1 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=fill 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@23 -- # accel_opc=fill 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=0x80 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case 
"$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=software 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@22 -- # accel_module=software 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=64 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=64 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=1 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val=Yes 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:46.917 20:04:28 -- accel/accel.sh@20 -- # val= 00:12:46.917 20:04:28 -- accel/accel.sh@21 -- # case "$var" in 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # IFS=: 00:12:46.917 20:04:28 -- accel/accel.sh@19 -- # read -r var val 00:12:48.289 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.289 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.289 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.289 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.289 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.289 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.289 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.289 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.289 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.290 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.290 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.290 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.290 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.290 20:04:30 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:12:48.290 20:04:30 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:48.290 20:04:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:48.290 00:12:48.290 real 0m1.484s 00:12:48.290 user 0m1.293s 00:12:48.290 sys 0m0.093s 00:12:48.290 20:04:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.290 20:04:30 -- common/autotest_common.sh@10 -- # set +x 00:12:48.290 ************************************ 00:12:48.290 END TEST accel_fill 00:12:48.290 ************************************ 00:12:48.290 20:04:30 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:48.290 20:04:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:48.290 20:04:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.290 20:04:30 -- common/autotest_common.sh@10 -- # set +x 00:12:48.290 ************************************ 00:12:48.290 START TEST accel_copy_crc32c 00:12:48.290 ************************************ 00:12:48.290 20:04:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:12:48.290 20:04:30 -- accel/accel.sh@16 -- # local accel_opc 00:12:48.290 20:04:30 -- accel/accel.sh@17 -- # local accel_module 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.290 20:04:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:48.290 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.290 20:04:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:48.290 20:04:30 -- accel/accel.sh@12 -- # build_accel_config 00:12:48.290 20:04:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:48.290 20:04:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:48.290 20:04:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:48.290 20:04:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:48.290 20:04:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:48.290 20:04:30 -- accel/accel.sh@40 -- # local IFS=, 00:12:48.290 20:04:30 -- accel/accel.sh@41 -- # jq -r . 00:12:48.290 [2024-04-24 20:04:30.268750] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
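
The copy_crc32c workload started here combines the two earlier operations, as the name suggests: the source is copied to the destination and a CRC-32C is computed over the copied data in the same request. A combined sketch (illustrative only; buffer pattern, zero seed and the crc32c_sw helper are assumptions, not SPDK code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SZ 4096

    /* same bit-at-a-time CRC-32C as in the earlier sketch */
    static uint32_t crc32c_sw(uint32_t crc, const uint8_t *p, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        static uint8_t src[BUF_SZ], dst[BUF_SZ];

        for (size_t i = 0; i < BUF_SZ; i++)
            src[i] = (uint8_t)(i * 7);             /* arbitrary pattern (assumption) */

        memcpy(dst, src, BUF_SZ);                   /* copy ...                       */
        uint32_t crc = crc32c_sw(0, dst, BUF_SZ);   /* ... and checksum the copied data */

        printf("copy ok=%d crc32c=0x%08" PRIX32 "\n",
               memcmp(src, dst, BUF_SZ) == 0, crc);
        return 0;
    }
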
00:12:48.290 [2024-04-24 20:04:30.268878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:12:48.290 [2024-04-24 20:04:30.397439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.290 [2024-04-24 20:04:30.509894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=0x1 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=0 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=software 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@22 -- # accel_module=software 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=32 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=32 
00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=1 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val=Yes 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:48.548 20:04:30 -- accel/accel.sh@20 -- # val= 00:12:48.548 20:04:30 -- accel/accel.sh@21 -- # case "$var" in 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # IFS=: 00:12:48.548 20:04:30 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@20 -- # val= 00:12:49.483 20:04:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@20 -- # val= 00:12:49.483 20:04:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@20 -- # val= 00:12:49.483 20:04:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@20 -- # val= 00:12:49.483 20:04:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@20 -- # val= 00:12:49.483 20:04:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@20 -- # val= 00:12:49.483 20:04:31 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.483 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.483 20:04:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:49.483 20:04:31 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:49.483 20:04:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:49.483 00:12:49.483 real 0m1.482s 00:12:49.483 user 0m1.280s 00:12:49.483 sys 0m0.091s 00:12:49.483 20:04:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:49.483 20:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.483 ************************************ 00:12:49.483 END TEST accel_copy_crc32c 00:12:49.483 ************************************ 00:12:49.740 20:04:31 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:49.740 20:04:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:12:49.740 20:04:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:49.740 20:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:49.740 ************************************ 00:12:49.740 START TEST accel_copy_crc32c_C2 00:12:49.740 ************************************ 00:12:49.740 20:04:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:49.740 20:04:31 -- accel/accel.sh@16 -- # local accel_opc 00:12:49.740 20:04:31 -- accel/accel.sh@17 -- # local accel_module 00:12:49.740 20:04:31 -- accel/accel.sh@19 -- # IFS=: 00:12:49.740 20:04:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:49.740 20:04:31 -- accel/accel.sh@19 -- # read -r var val 00:12:49.740 20:04:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:12:49.740 20:04:31 -- accel/accel.sh@12 -- # build_accel_config 00:12:49.740 20:04:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:49.740 20:04:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:49.740 20:04:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:49.740 20:04:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:49.740 20:04:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:49.740 20:04:31 -- accel/accel.sh@40 -- # local IFS=, 00:12:49.741 20:04:31 -- accel/accel.sh@41 -- # jq -r . 00:12:49.741 [2024-04-24 20:04:31.814309] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:49.741 [2024-04-24 20:04:31.814724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:12:49.741 [2024-04-24 20:04:31.940083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.999 [2024-04-24 20:04:32.058162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=0x1 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=0 00:12:49.999 20:04:32 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=software 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@22 -- # accel_module=software 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=32 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=32 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=1 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val=Yes 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:49.999 20:04:32 -- accel/accel.sh@20 -- # val= 00:12:49.999 20:04:32 -- accel/accel.sh@21 -- # case "$var" in 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # IFS=: 00:12:49.999 20:04:32 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.374 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.374 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.374 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 
00:12:51.374 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.374 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.374 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.374 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:51.374 20:04:33 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:51.374 20:04:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:51.374 00:12:51.374 real 0m1.473s 00:12:51.374 user 0m1.284s 00:12:51.374 sys 0m0.089s 00:12:51.374 20:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:51.374 20:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.374 ************************************ 00:12:51.374 END TEST accel_copy_crc32c_C2 00:12:51.374 ************************************ 00:12:51.374 20:04:33 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:12:51.374 20:04:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:51.374 20:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.374 20:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:51.374 ************************************ 00:12:51.374 START TEST accel_dualcast 00:12:51.374 ************************************ 00:12:51.374 20:04:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:12:51.374 20:04:33 -- accel/accel.sh@16 -- # local accel_opc 00:12:51.374 20:04:33 -- accel/accel.sh@17 -- # local accel_module 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.374 20:04:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:51.374 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.374 20:04:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:51.374 20:04:33 -- accel/accel.sh@12 -- # build_accel_config 00:12:51.374 20:04:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:51.374 20:04:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:51.374 20:04:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:51.374 20:04:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:51.374 20:04:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:51.374 20:04:33 -- accel/accel.sh@40 -- # local IFS=, 00:12:51.374 20:04:33 -- accel/accel.sh@41 -- # jq -r . 00:12:51.374 [2024-04-24 20:04:33.457599] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
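
The dualcast workload started here (-t 1 -w dualcast -y) writes one 4096-byte source into two destinations and verifies both copies. A sketch of the equivalent operation, with the fill pattern assumed for illustration (not SPDK code):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BUF_SZ 4096

    int main(void)
    {
        static uint8_t src[BUF_SZ], dst1[BUF_SZ], dst2[BUF_SZ];

        for (size_t i = 0; i < BUF_SZ; i++)
            src[i] = (uint8_t)(255 - i);           /* arbitrary pattern (assumption) */

        /* dualcast: the same source lands in two destinations */
        memcpy(dst1, src, BUF_SZ);
        memcpy(dst2, src, BUF_SZ);

        printf("dualcast %s\n",
               memcmp(src, dst1, BUF_SZ) == 0 && memcmp(src, dst2, BUF_SZ) == 0
                   ? "ok" : "MISMATCH");
        return 0;
    }
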
00:12:51.374 [2024-04-24 20:04:33.457780] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61477 ] 00:12:51.374 [2024-04-24 20:04:33.597887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.634 [2024-04-24 20:04:33.692751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=0x1 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=dualcast 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=software 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@22 -- # accel_module=software 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=32 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=32 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=1 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val='1 seconds' 
00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val=Yes 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:51.634 20:04:33 -- accel/accel.sh@20 -- # val= 00:12:51.634 20:04:33 -- accel/accel.sh@21 -- # case "$var" in 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # IFS=: 00:12:51.634 20:04:33 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@20 -- # val= 00:12:53.016 20:04:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@20 -- # val= 00:12:53.016 20:04:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@20 -- # val= 00:12:53.016 20:04:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@20 -- # val= 00:12:53.016 20:04:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@20 -- # val= 00:12:53.016 20:04:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@20 -- # val= 00:12:53.016 ************************************ 00:12:53.016 END TEST accel_dualcast 00:12:53.016 ************************************ 00:12:53.016 20:04:34 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:34 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.016 20:04:34 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:53.016 20:04:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.016 00:12:53.016 real 0m1.479s 00:12:53.016 user 0m1.292s 00:12:53.016 sys 0m0.099s 00:12:53.016 20:04:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:53.016 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.016 20:04:34 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:53.016 20:04:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:53.016 20:04:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:53.016 20:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:53.016 ************************************ 00:12:53.016 START TEST accel_compare 00:12:53.016 ************************************ 00:12:53.016 20:04:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:12:53.016 20:04:35 -- accel/accel.sh@16 -- # local accel_opc 00:12:53.016 20:04:35 -- accel/accel.sh@17 -- # local 
accel_module 00:12:53.016 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.016 20:04:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:53.016 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.016 20:04:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:53.016 20:04:35 -- accel/accel.sh@12 -- # build_accel_config 00:12:53.016 20:04:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.016 20:04:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.016 20:04:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.016 20:04:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.016 20:04:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.016 20:04:35 -- accel/accel.sh@40 -- # local IFS=, 00:12:53.016 20:04:35 -- accel/accel.sh@41 -- # jq -r . 00:12:53.016 [2024-04-24 20:04:35.067586] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:53.016 [2024-04-24 20:04:35.067712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61521 ] 00:12:53.016 [2024-04-24 20:04:35.209041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.275 [2024-04-24 20:04:35.308168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=0x1 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=compare 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@23 -- # accel_opc=compare 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=software 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 
00:12:53.275 20:04:35 -- accel/accel.sh@22 -- # accel_module=software 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=32 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=32 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=1 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val=Yes 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:53.275 20:04:35 -- accel/accel.sh@20 -- # val= 00:12:53.275 20:04:35 -- accel/accel.sh@21 -- # case "$var" in 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # IFS=: 00:12:53.275 20:04:35 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.651 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.651 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.651 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.651 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.651 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.651 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:54.651 20:04:36 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:54.651 20:04:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:54.651 00:12:54.651 real 0m1.484s 00:12:54.651 user 0m1.300s 00:12:54.651 sys 
0m0.094s 00:12:54.651 20:04:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.651 ************************************ 00:12:54.651 END TEST accel_compare 00:12:54.651 ************************************ 00:12:54.651 20:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.651 20:04:36 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:54.651 20:04:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:54.651 20:04:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:54.651 20:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:54.651 ************************************ 00:12:54.651 START TEST accel_xor 00:12:54.651 ************************************ 00:12:54.651 20:04:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:12:54.651 20:04:36 -- accel/accel.sh@16 -- # local accel_opc 00:12:54.651 20:04:36 -- accel/accel.sh@17 -- # local accel_module 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.651 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.651 20:04:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:54.651 20:04:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:54.651 20:04:36 -- accel/accel.sh@12 -- # build_accel_config 00:12:54.651 20:04:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:54.651 20:04:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:54.651 20:04:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:54.651 20:04:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:54.651 20:04:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:54.651 20:04:36 -- accel/accel.sh@40 -- # local IFS=, 00:12:54.651 20:04:36 -- accel/accel.sh@41 -- # jq -r . 00:12:54.651 [2024-04-24 20:04:36.678471] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:54.651 [2024-04-24 20:04:36.678833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61556 ] 00:12:54.651 [2024-04-24 20:04:36.818312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.909 [2024-04-24 20:04:36.914513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.909 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.909 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.909 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.909 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.909 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.909 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.909 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.909 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.909 20:04:36 -- accel/accel.sh@20 -- # val=0x1 00:12:54.909 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.909 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.909 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.909 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=xor 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=2 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=software 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@22 -- # accel_module=software 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=32 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=32 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=1 00:12:54.910 20:04:36 -- 
accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val=Yes 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:54.910 20:04:36 -- accel/accel.sh@20 -- # val= 00:12:54.910 20:04:36 -- accel/accel.sh@21 -- # case "$var" in 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # IFS=: 00:12:54.910 20:04:36 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.288 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.288 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.288 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.288 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.288 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.288 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.288 20:04:38 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:56.288 20:04:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.288 00:12:56.288 real 0m1.470s 00:12:56.288 user 0m1.292s 00:12:56.288 sys 0m0.089s 00:12:56.288 ************************************ 00:12:56.288 END TEST accel_xor 00:12:56.288 ************************************ 00:12:56.288 20:04:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:56.288 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.288 20:04:38 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:56.288 20:04:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:12:56.288 20:04:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:56.288 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:56.288 ************************************ 00:12:56.288 START TEST accel_xor 00:12:56.288 ************************************ 00:12:56.288 
20:04:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:12:56.288 20:04:38 -- accel/accel.sh@16 -- # local accel_opc 00:12:56.288 20:04:38 -- accel/accel.sh@17 -- # local accel_module 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.288 20:04:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:56.288 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.288 20:04:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:56.288 20:04:38 -- accel/accel.sh@12 -- # build_accel_config 00:12:56.288 20:04:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.288 20:04:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.288 20:04:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.288 20:04:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.288 20:04:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.288 20:04:38 -- accel/accel.sh@40 -- # local IFS=, 00:12:56.288 20:04:38 -- accel/accel.sh@41 -- # jq -r . 00:12:56.288 [2024-04-24 20:04:38.279770] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:56.288 [2024-04-24 20:04:38.279900] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61600 ] 00:12:56.288 [2024-04-24 20:04:38.421368] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.288 [2024-04-24 20:04:38.520092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.548 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.548 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.548 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.548 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.548 20:04:38 -- accel/accel.sh@20 -- # val=0x1 00:12:56.548 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.548 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.548 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.548 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.548 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.548 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.548 20:04:38 -- accel/accel.sh@20 -- # val=xor 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@23 -- # accel_opc=xor 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val=3 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 
00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val=software 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@22 -- # accel_module=software 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val=32 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val=32 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val=1 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val=Yes 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:56.549 20:04:38 -- accel/accel.sh@20 -- # val= 00:12:56.549 20:04:38 -- accel/accel.sh@21 -- # case "$var" in 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # IFS=: 00:12:56.549 20:04:38 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@20 -- # val= 00:12:57.537 20:04:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@20 -- # val= 00:12:57.537 20:04:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@20 -- # val= 00:12:57.537 20:04:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@20 -- # val= 00:12:57.537 20:04:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@20 -- # val= 00:12:57.537 20:04:39 -- accel/accel.sh@21 -- # case "$var" in 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@20 -- # val= 00:12:57.537 20:04:39 -- accel/accel.sh@21 -- # case "$var" in 
00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.537 ************************************ 00:12:57.537 END TEST accel_xor 00:12:57.537 ************************************ 00:12:57.537 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.537 20:04:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:57.537 20:04:39 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:57.537 20:04:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:57.537 00:12:57.537 real 0m1.498s 00:12:57.537 user 0m1.313s 00:12:57.537 sys 0m0.093s 00:12:57.537 20:04:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:57.537 20:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.537 20:04:39 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:57.537 20:04:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:57.537 20:04:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:57.537 20:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:57.796 ************************************ 00:12:57.796 START TEST accel_dif_verify 00:12:57.796 ************************************ 00:12:57.796 20:04:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:12:57.796 20:04:39 -- accel/accel.sh@16 -- # local accel_opc 00:12:57.796 20:04:39 -- accel/accel.sh@17 -- # local accel_module 00:12:57.796 20:04:39 -- accel/accel.sh@19 -- # IFS=: 00:12:57.796 20:04:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:57.796 20:04:39 -- accel/accel.sh@19 -- # read -r var val 00:12:57.796 20:04:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:57.796 20:04:39 -- accel/accel.sh@12 -- # build_accel_config 00:12:57.796 20:04:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:57.796 20:04:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:57.796 20:04:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:57.796 20:04:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:57.796 20:04:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:57.796 20:04:39 -- accel/accel.sh@40 -- # local IFS=, 00:12:57.796 20:04:39 -- accel/accel.sh@41 -- # jq -r . 00:12:57.796 [2024-04-24 20:04:39.870596] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:12:57.796 [2024-04-24 20:04:39.870711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:12:57.796 [2024-04-24 20:04:39.993976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.055 [2024-04-24 20:04:40.098322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val=0x1 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val=dif_verify 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val=software 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@22 -- # accel_module=software 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 
-- # val=32 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val=32 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val=1 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val=No 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:58.055 20:04:40 -- accel/accel.sh@20 -- # val= 00:12:58.055 20:04:40 -- accel/accel.sh@21 -- # case "$var" in 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # IFS=: 00:12:58.055 20:04:40 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.436 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.436 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.436 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.436 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.436 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.436 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:59.436 20:04:41 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:59.436 20:04:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:59.436 00:12:59.436 real 0m1.458s 00:12:59.436 user 0m1.276s 00:12:59.436 sys 0m0.086s 00:12:59.436 20:04:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:59.436 20:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.436 ************************************ 00:12:59.436 END TEST 
accel_dif_verify 00:12:59.436 ************************************ 00:12:59.436 20:04:41 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:59.436 20:04:41 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:59.436 20:04:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.436 20:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.436 ************************************ 00:12:59.436 START TEST accel_dif_generate 00:12:59.436 ************************************ 00:12:59.436 20:04:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:12:59.436 20:04:41 -- accel/accel.sh@16 -- # local accel_opc 00:12:59.436 20:04:41 -- accel/accel.sh@17 -- # local accel_module 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.436 20:04:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:12:59.436 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.436 20:04:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:59.436 20:04:41 -- accel/accel.sh@12 -- # build_accel_config 00:12:59.436 20:04:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.436 20:04:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.436 20:04:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.436 20:04:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.436 20:04:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.436 20:04:41 -- accel/accel.sh@40 -- # local IFS=, 00:12:59.436 20:04:41 -- accel/accel.sh@41 -- # jq -r . 00:12:59.436 [2024-04-24 20:04:41.482653] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:12:59.436 [2024-04-24 20:04:41.482745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61677 ] 00:12:59.436 [2024-04-24 20:04:41.621661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.696 [2024-04-24 20:04:41.722084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=0x1 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=dif_generate 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val='512 bytes' 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val='8 bytes' 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=software 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@22 -- # accel_module=software 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=32 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=32 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=1 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val=No 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:12:59.696 20:04:41 -- accel/accel.sh@20 -- # val= 00:12:59.696 20:04:41 -- accel/accel.sh@21 -- # case "$var" in 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # IFS=: 00:12:59.696 20:04:41 -- accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@20 -- # val= 00:13:01.078 20:04:42 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # IFS=: 00:13:01.078 20:04:42 -- 
accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@20 -- # val= 00:13:01.078 20:04:42 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # IFS=: 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@20 -- # val= 00:13:01.078 20:04:42 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # IFS=: 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@20 -- # val= 00:13:01.078 20:04:42 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # IFS=: 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@20 -- # val= 00:13:01.078 20:04:42 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # IFS=: 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@20 -- # val= 00:13:01.078 20:04:42 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # IFS=: 00:13:01.078 20:04:42 -- accel/accel.sh@19 -- # read -r var val 00:13:01.078 20:04:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:01.078 20:04:42 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:01.078 20:04:42 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:01.078 00:13:01.078 real 0m1.479s 00:13:01.078 user 0m1.291s 00:13:01.078 sys 0m0.100s 00:13:01.078 20:04:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:01.078 20:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:01.078 ************************************ 00:13:01.078 END TEST accel_dif_generate 00:13:01.078 ************************************ 00:13:01.078 20:04:42 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:01.078 20:04:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:13:01.078 20:04:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.078 20:04:42 -- common/autotest_common.sh@10 -- # set +x 00:13:01.078 ************************************ 00:13:01.078 START TEST accel_dif_generate_copy 00:13:01.078 ************************************ 00:13:01.078 20:04:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:13:01.079 20:04:43 -- accel/accel.sh@16 -- # local accel_opc 00:13:01.079 20:04:43 -- accel/accel.sh@17 -- # local accel_module 00:13:01.079 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.079 20:04:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:01.079 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.079 20:04:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:01.079 20:04:43 -- accel/accel.sh@12 -- # build_accel_config 00:13:01.079 20:04:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:01.079 20:04:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:01.079 20:04:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.079 20:04:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.079 20:04:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:01.079 20:04:43 -- accel/accel.sh@40 -- # local IFS=, 00:13:01.079 20:04:43 -- accel/accel.sh@41 -- # jq -r . 00:13:01.079 [2024-04-24 20:04:43.103217] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:01.079 [2024-04-24 20:04:43.103371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:13:01.079 [2024-04-24 20:04:43.243323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.338 [2024-04-24 20:04:43.341377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val=0x1 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val=software 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@22 -- # accel_module=software 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val=32 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val=32 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 
-- # val=1 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val=No 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:01.338 20:04:43 -- accel/accel.sh@20 -- # val= 00:13:01.338 20:04:43 -- accel/accel.sh@21 -- # case "$var" in 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # IFS=: 00:13:01.338 20:04:43 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@20 -- # val= 00:13:02.726 20:04:44 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@20 -- # val= 00:13:02.726 20:04:44 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@20 -- # val= 00:13:02.726 20:04:44 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@20 -- # val= 00:13:02.726 20:04:44 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@20 -- # val= 00:13:02.726 20:04:44 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@20 -- # val= 00:13:02.726 20:04:44 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.726 20:04:44 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:02.726 20:04:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.726 00:13:02.726 real 0m1.480s 00:13:02.726 user 0m1.294s 00:13:02.726 sys 0m0.097s 00:13:02.726 20:04:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.726 20:04:44 -- common/autotest_common.sh@10 -- # set +x 00:13:02.726 ************************************ 00:13:02.726 END TEST accel_dif_generate_copy 00:13:02.726 ************************************ 00:13:02.726 20:04:44 -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:02.726 20:04:44 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.726 20:04:44 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:02.726 20:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.726 20:04:44 -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.726 ************************************ 00:13:02.726 START TEST accel_comp 00:13:02.726 ************************************ 00:13:02.726 20:04:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.726 20:04:44 -- accel/accel.sh@16 -- # local accel_opc 00:13:02.726 20:04:44 -- accel/accel.sh@17 -- # local accel_module 00:13:02.726 20:04:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # IFS=: 00:13:02.726 20:04:44 -- accel/accel.sh@19 -- # read -r var val 00:13:02.726 20:04:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.726 20:04:44 -- accel/accel.sh@12 -- # build_accel_config 00:13:02.726 20:04:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.726 20:04:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.726 20:04:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.726 20:04:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.726 20:04:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.726 20:04:44 -- accel/accel.sh@40 -- # local IFS=, 00:13:02.726 20:04:44 -- accel/accel.sh@41 -- # jq -r . 00:13:02.726 [2024-04-24 20:04:44.734763] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:02.726 [2024-04-24 20:04:44.734871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61755 ] 00:13:02.726 [2024-04-24 20:04:44.877052] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.984 [2024-04-24 20:04:44.979126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=0x1 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=compress 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@23 
-- # accel_opc=compress 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=software 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@22 -- # accel_module=software 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=32 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=32 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=1 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val=No 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:02.984 20:04:45 -- accel/accel.sh@20 -- # val= 00:13:02.984 20:04:45 -- accel/accel.sh@21 -- # case "$var" in 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # IFS=: 00:13:02.984 20:04:45 -- accel/accel.sh@19 -- # read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.363 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.363 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.363 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # 
read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.363 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.363 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.363 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.363 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.363 20:04:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:04.363 20:04:46 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:04.363 20:04:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:04.363 00:13:04.363 real 0m1.501s 00:13:04.363 user 0m1.304s 00:13:04.363 sys 0m0.108s 00:13:04.363 20:04:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:04.363 20:04:46 -- common/autotest_common.sh@10 -- # set +x 00:13:04.363 ************************************ 00:13:04.363 END TEST accel_comp 00:13:04.363 ************************************ 00:13:04.363 20:04:46 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:04.363 20:04:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:13:04.363 20:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.363 20:04:46 -- common/autotest_common.sh@10 -- # set +x 00:13:04.363 ************************************ 00:13:04.363 START TEST accel_decomp 00:13:04.363 ************************************ 00:13:04.363 20:04:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:04.363 20:04:46 -- accel/accel.sh@16 -- # local accel_opc 00:13:04.363 20:04:46 -- accel/accel.sh@17 -- # local accel_module 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:04.364 20:04:46 -- accel/accel.sh@12 -- # build_accel_config 00:13:04.364 20:04:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:04.364 20:04:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:04.364 20:04:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.364 20:04:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:04.364 20:04:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:04.364 20:04:46 -- accel/accel.sh@40 -- # local IFS=, 00:13:04.364 20:04:46 -- accel/accel.sh@41 -- # jq -r . 00:13:04.364 [2024-04-24 20:04:46.304653] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:04.364 [2024-04-24 20:04:46.304721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61794 ] 00:13:04.364 [2024-04-24 20:04:46.438806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.364 [2024-04-24 20:04:46.551876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val=0x1 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val=decompress 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val=software 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@22 -- # accel_module=software 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val=32 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- 
accel/accel.sh@20 -- # val=32 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.364 20:04:46 -- accel/accel.sh@20 -- # val=1 00:13:04.364 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.364 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.623 20:04:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:04.623 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.623 20:04:46 -- accel/accel.sh@20 -- # val=Yes 00:13:04.623 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.623 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.623 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:04.623 20:04:46 -- accel/accel.sh@20 -- # val= 00:13:04.623 20:04:46 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # IFS=: 00:13:04.623 20:04:46 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@20 -- # val= 00:13:05.559 20:04:47 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@20 -- # val= 00:13:05.559 20:04:47 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@20 -- # val= 00:13:05.559 20:04:47 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@20 -- # val= 00:13:05.559 20:04:47 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@20 -- # val= 00:13:05.559 20:04:47 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@20 -- # val= 00:13:05.559 20:04:47 -- accel/accel.sh@21 -- # case "$var" in 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.559 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.559 20:04:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.559 20:04:47 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:05.559 20:04:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.559 00:13:05.559 real 0m1.485s 00:13:05.559 user 0m1.295s 00:13:05.559 sys 0m0.099s 00:13:05.559 20:04:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:05.559 20:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.559 ************************************ 00:13:05.559 END TEST accel_decomp 00:13:05.559 ************************************ 00:13:05.819 20:04:47 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
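The accel_decomp case above exercises the decompress path through the accel_perf example binary, and its full command line is captured in the trace. As a rough, hedged sketch of replaying that run by hand — flag meanings are inferred from the recorded invocation (-t run time in seconds, -w workload, -l input file, -y verify the result), not taken from the tool's documentation, and the JSON accel config the harness feeds over /dev/fd/62 is assumed to be optional for a default software build:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Same decompress workload as the traced accel_decomp run, without the
    # harness-generated config fd; paths match the log, flag semantics are assumed.
    "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y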
00:13:05.819 20:04:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:05.819 20:04:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.819 20:04:47 -- common/autotest_common.sh@10 -- # set +x 00:13:05.819 ************************************ 00:13:05.819 START TEST accel_decmop_full 00:13:05.819 ************************************ 00:13:05.819 20:04:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:05.819 20:04:47 -- accel/accel.sh@16 -- # local accel_opc 00:13:05.819 20:04:47 -- accel/accel.sh@17 -- # local accel_module 00:13:05.819 20:04:47 -- accel/accel.sh@19 -- # IFS=: 00:13:05.819 20:04:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:05.819 20:04:47 -- accel/accel.sh@19 -- # read -r var val 00:13:05.819 20:04:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:05.819 20:04:47 -- accel/accel.sh@12 -- # build_accel_config 00:13:05.819 20:04:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.819 20:04:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.819 20:04:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.819 20:04:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.819 20:04:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.819 20:04:47 -- accel/accel.sh@40 -- # local IFS=, 00:13:05.819 20:04:47 -- accel/accel.sh@41 -- # jq -r . 00:13:05.819 [2024-04-24 20:04:47.926984] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:05.819 [2024-04-24 20:04:47.927059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61833 ] 00:13:05.819 [2024-04-24 20:04:48.067896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.088 [2024-04-24 20:04:48.165270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=0x1 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 
20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=decompress 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=software 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@22 -- # accel_module=software 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=32 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=32 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=1 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val=Yes 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:06.088 20:04:48 -- accel/accel.sh@20 -- # val= 00:13:06.088 20:04:48 -- accel/accel.sh@21 -- # case "$var" in 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # IFS=: 00:13:06.088 20:04:48 -- accel/accel.sh@19 -- # read -r var val 00:13:07.468 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.468 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.468 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.468 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # read -r 
var val 00:13:07.468 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.468 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.468 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.468 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.468 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.468 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.468 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.468 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.468 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.468 20:04:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:07.468 20:04:49 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:07.468 20:04:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:07.468 00:13:07.468 real 0m1.485s 00:13:07.468 user 0m1.305s 00:13:07.468 sys 0m0.094s 00:13:07.469 20:04:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:07.469 20:04:49 -- common/autotest_common.sh@10 -- # set +x 00:13:07.469 ************************************ 00:13:07.469 END TEST accel_decmop_full 00:13:07.469 ************************************ 00:13:07.469 20:04:49 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:07.469 20:04:49 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:07.469 20:04:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.469 20:04:49 -- common/autotest_common.sh@10 -- # set +x 00:13:07.469 ************************************ 00:13:07.469 START TEST accel_decomp_mcore 00:13:07.469 ************************************ 00:13:07.469 20:04:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:07.469 20:04:49 -- accel/accel.sh@16 -- # local accel_opc 00:13:07.469 20:04:49 -- accel/accel.sh@17 -- # local accel_module 00:13:07.469 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.469 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.469 20:04:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:07.469 20:04:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:07.469 20:04:49 -- accel/accel.sh@12 -- # build_accel_config 00:13:07.469 20:04:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:07.469 20:04:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:07.469 20:04:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:07.469 20:04:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:07.469 20:04:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:07.469 20:04:49 -- accel/accel.sh@40 -- # local IFS=, 00:13:07.469 20:04:49 -- accel/accel.sh@41 -- # jq -r . 00:13:07.469 [2024-04-24 20:04:49.571554] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
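The accel_decmop_full variant that just finished runs the same workload with -o 0 added. Its trace shows 111250-byte buffers where the plain accel_decomp run showed 4096-byte ones, which suggests -o 0 makes accel_perf size each operation to the whole bib input instead of the 4 KiB default; that reading is an inference from the log, not a verified flag description:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Full-buffer, single-core variant mirroring the traced accel_decmop_full run.
    # -o 0 is assumed (from the 111250-byte buffers in the trace) to mean
    # "use the whole input file per operation".
    "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -o 0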
00:13:07.469 [2024-04-24 20:04:49.571635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:13:07.469 [2024-04-24 20:04:49.712522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.728 [2024-04-24 20:04:49.816580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.728 [2024-04-24 20:04:49.816767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.728 [2024-04-24 20:04:49.816935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.728 [2024-04-24 20:04:49.816940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=0xf 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=decompress 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=software 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@22 -- # accel_module=software 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 
00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=32 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=32 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=1 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val=Yes 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:07.728 20:04:49 -- accel/accel.sh@20 -- # val= 00:13:07.728 20:04:49 -- accel/accel.sh@21 -- # case "$var" in 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # IFS=: 00:13:07.728 20:04:49 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- 
accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.107 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:09.107 20:04:51 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:09.107 20:04:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:09.107 00:13:09.107 real 0m1.505s 00:13:09.107 user 0m4.631s 00:13:09.107 sys 0m0.112s 00:13:09.107 20:04:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:09.107 ************************************ 00:13:09.107 END TEST accel_decomp_mcore 00:13:09.107 ************************************ 00:13:09.107 20:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.107 20:04:51 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:09.107 20:04:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:09.107 20:04:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.107 20:04:51 -- common/autotest_common.sh@10 -- # set +x 00:13:09.107 ************************************ 00:13:09.107 START TEST accel_decomp_full_mcore 00:13:09.107 ************************************ 00:13:09.107 20:04:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:09.107 20:04:51 -- accel/accel.sh@16 -- # local accel_opc 00:13:09.107 20:04:51 -- accel/accel.sh@17 -- # local accel_module 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.107 20:04:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:09.107 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.107 20:04:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:09.107 20:04:51 -- accel/accel.sh@12 -- # build_accel_config 00:13:09.107 20:04:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:09.107 20:04:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:09.107 20:04:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:09.107 20:04:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:09.107 20:04:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:09.107 20:04:51 -- accel/accel.sh@40 -- # local IFS=, 00:13:09.107 20:04:51 -- accel/accel.sh@41 -- # jq -r . 00:13:09.107 [2024-04-24 20:04:51.186473] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:09.107 [2024-04-24 20:04:51.186661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61920 ] 00:13:09.107 [2024-04-24 20:04:51.329516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.367 [2024-04-24 20:04:51.431689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.367 [2024-04-24 20:04:51.431824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.367 [2024-04-24 20:04:51.432011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.367 [2024-04-24 20:04:51.432014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=0xf 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=decompress 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=software 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@22 -- # accel_module=software 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 
00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=32 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=32 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=1 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val=Yes 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:09.367 20:04:51 -- accel/accel.sh@20 -- # val= 00:13:09.367 20:04:51 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # IFS=: 00:13:09.367 20:04:51 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- 
accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@20 -- # val= 00:13:10.783 20:04:52 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:10.783 20:04:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:10.783 20:04:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:10.783 00:13:10.783 real 0m1.520s 00:13:10.783 user 0m4.702s 00:13:10.783 sys 0m0.104s 00:13:10.783 20:04:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:10.783 20:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.783 ************************************ 00:13:10.783 END TEST accel_decomp_full_mcore 00:13:10.783 ************************************ 00:13:10.783 20:04:52 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:10.783 20:04:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:13:10.783 20:04:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:10.783 20:04:52 -- common/autotest_common.sh@10 -- # set +x 00:13:10.783 ************************************ 00:13:10.783 START TEST accel_decomp_mthread 00:13:10.783 ************************************ 00:13:10.783 20:04:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:10.783 20:04:52 -- accel/accel.sh@16 -- # local accel_opc 00:13:10.783 20:04:52 -- accel/accel.sh@17 -- # local accel_module 00:13:10.783 20:04:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # IFS=: 00:13:10.783 20:04:52 -- accel/accel.sh@19 -- # read -r var val 00:13:10.783 20:04:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:10.783 20:04:52 -- accel/accel.sh@12 -- # build_accel_config 00:13:10.783 20:04:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:10.783 20:04:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:10.783 20:04:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.783 20:04:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.783 20:04:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:10.783 20:04:52 -- accel/accel.sh@40 -- # local IFS=, 00:13:10.783 20:04:52 -- accel/accel.sh@41 -- # jq -r . 00:13:10.783 [2024-04-24 20:04:52.826167] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
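The two mcore variants recorded above add -m 0xf, and their EAL output confirms the effect: four cores are reported available and reactors start on cores 0 through 3 instead of the single core 0 seen elsewhere in this log. A hedged sketch of the multi-core, full-buffer case, with the same caveat as before that the flag semantics are inferred from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Multi-core variant matching the traced accel_decomp_full_mcore run;
    # -m 0xf is the core mask (four reactors in the log), -o 0 as assumed above.
    "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -o 0 -m 0xf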
00:13:10.783 [2024-04-24 20:04:52.826663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61961 ] 00:13:10.783 [2024-04-24 20:04:52.952462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.043 [2024-04-24 20:04:53.058857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=0x1 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=decompress 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=software 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@22 -- # accel_module=software 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=32 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- 
accel/accel.sh@20 -- # val=32 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=2 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val=Yes 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:11.043 20:04:53 -- accel/accel.sh@20 -- # val= 00:13:11.043 20:04:53 -- accel/accel.sh@21 -- # case "$var" in 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # IFS=: 00:13:11.043 20:04:53 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.422 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.422 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.422 20:04:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:12.422 20:04:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:12.422 20:04:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:12.422 00:13:12.422 real 0m1.503s 00:13:12.422 user 0m1.317s 00:13:12.422 sys 0m0.090s 00:13:12.422 20:04:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:12.422 20:04:54 -- common/autotest_common.sh@10 -- # set +x 00:13:12.422 ************************************ 00:13:12.422 END 
TEST accel_decomp_mthread 00:13:12.422 ************************************ 00:13:12.423 20:04:54 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:12.423 20:04:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:12.423 20:04:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:12.423 20:04:54 -- common/autotest_common.sh@10 -- # set +x 00:13:12.423 ************************************ 00:13:12.423 START TEST accel_deomp_full_mthread 00:13:12.423 ************************************ 00:13:12.423 20:04:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:12.423 20:04:54 -- accel/accel.sh@16 -- # local accel_opc 00:13:12.423 20:04:54 -- accel/accel.sh@17 -- # local accel_module 00:13:12.423 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.423 20:04:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:12.423 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.423 20:04:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:12.423 20:04:54 -- accel/accel.sh@12 -- # build_accel_config 00:13:12.423 20:04:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.423 20:04:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.423 20:04:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.423 20:04:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.423 20:04:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.423 20:04:54 -- accel/accel.sh@40 -- # local IFS=, 00:13:12.423 20:04:54 -- accel/accel.sh@41 -- # jq -r . 00:13:12.423 [2024-04-24 20:04:54.387112] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
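The *_mthread variants, including accel_decomp_mthread which just completed, add -T 2 while still running with core mask 0x1 — the trace shows only one core available and a single reactor on core 0 — so -T presumably controls worker threads inside accel_perf rather than reactor cores; again an inference from the log rather than a documented guarantee:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Threaded single-core variant matching the traced accel_decomp_mthread run;
    # -T 2 is assumed to request two worker threads on the one reactor core.
    "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -T 2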
00:13:12.423 [2024-04-24 20:04:54.387203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62000 ] 00:13:12.423 [2024-04-24 20:04:54.514427] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.423 [2024-04-24 20:04:54.627646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=0x1 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=decompress 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=software 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@22 -- # accel_module=software 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=32 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- 
accel/accel.sh@20 -- # val=32 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=2 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val=Yes 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:12.684 20:04:54 -- accel/accel.sh@20 -- # val= 00:13:12.684 20:04:54 -- accel/accel.sh@21 -- # case "$var" in 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # IFS=: 00:13:12.684 20:04:54 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@20 -- # val= 00:13:14.065 20:04:55 -- accel/accel.sh@21 -- # case "$var" in 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # IFS=: 00:13:14.065 20:04:55 -- accel/accel.sh@19 -- # read -r var val 00:13:14.065 20:04:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:14.065 20:04:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:14.065 20:04:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:14.065 00:13:14.065 real 0m1.532s 00:13:14.065 user 0m1.336s 00:13:14.065 sys 0m0.092s 00:13:14.065 20:04:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.065 20:04:55 -- common/autotest_common.sh@10 -- # set +x 00:13:14.065 ************************************ 00:13:14.065 END 
TEST accel_deomp_full_mthread 00:13:14.065 ************************************ 00:13:14.065 20:04:55 -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:14.065 20:04:55 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:14.065 20:04:55 -- accel/accel.sh@137 -- # build_accel_config 00:13:14.065 20:04:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:14.065 20:04:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:14.065 20:04:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:14.065 20:04:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.065 20:04:55 -- common/autotest_common.sh@10 -- # set +x 00:13:14.065 20:04:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:14.065 20:04:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:14.065 20:04:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:14.065 20:04:55 -- accel/accel.sh@40 -- # local IFS=, 00:13:14.065 20:04:55 -- accel/accel.sh@41 -- # jq -r . 00:13:14.065 ************************************ 00:13:14.065 START TEST accel_dif_functional_tests 00:13:14.065 ************************************ 00:13:14.065 20:04:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:14.065 [2024-04-24 20:04:56.014318] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:14.065 [2024-04-24 20:04:56.014542] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62039 ] 00:13:14.065 [2024-04-24 20:04:56.142585] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.065 [2024-04-24 20:04:56.314409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.065 [2024-04-24 20:04:56.314519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.065 [2024-04-24 20:04:56.314521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.323 00:13:14.323 00:13:14.323 CUnit - A unit testing framework for C - Version 2.1-3 00:13:14.323 http://cunit.sourceforge.net/ 00:13:14.323 00:13:14.323 00:13:14.323 Suite: accel_dif 00:13:14.323 Test: verify: DIF generated, GUARD check ...passed 00:13:14.323 Test: verify: DIF generated, APPTAG check ...passed 00:13:14.323 Test: verify: DIF generated, REFTAG check ...passed 00:13:14.323 Test: verify: DIF not generated, GUARD check ...passed 00:13:14.323 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 20:04:56.457028] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:14.323 [2024-04-24 20:04:56.457110] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:14.323 [2024-04-24 20:04:56.457145] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:14.323 passed 00:13:14.323 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 20:04:56.457193] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:14.323 [2024-04-24 20:04:56.457217] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:14.323 passed 00:13:14.323 Test: verify: APPTAG correct, APPTAG check ...[2024-04-24 20:04:56.457239] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:13:14.323 passed 00:13:14.323 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 20:04:56.457325] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:14.323 passed 00:13:14.323 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:14.323 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:14.323 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:14.323 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 20:04:56.457539] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:14.323 passed 00:13:14.323 Test: generate copy: DIF generated, GUARD check ...passed 00:13:14.323 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:14.323 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:14.323 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:14.323 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:14.323 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:14.323 Test: generate copy: iovecs-len validate ...[2024-04-24 20:04:56.457886] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:14.323 passed 00:13:14.323 Test: generate copy: buffer alignment validate ...passed 00:13:14.323 00:13:14.323 Run Summary: Type Total Ran Passed Failed Inactive 00:13:14.323 suites 1 1 n/a 0 0 00:13:14.323 tests 20 20 20 0 0 00:13:14.323 asserts 204 204 204 0 n/a 00:13:14.323 00:13:14.323 Elapsed time = 0.002 seconds 00:13:14.582 00:13:14.582 real 0m0.845s 00:13:14.582 user 0m1.189s 00:13:14.582 sys 0m0.212s 00:13:14.582 ************************************ 00:13:14.582 END TEST accel_dif_functional_tests 00:13:14.582 ************************************ 00:13:14.582 20:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.582 20:04:56 -- common/autotest_common.sh@10 -- # set +x 00:13:14.842 ************************************ 00:13:14.842 END TEST accel 00:13:14.842 ************************************ 00:13:14.842 00:13:14.842 real 0m36.456s 00:13:14.842 user 0m37.457s 00:13:14.842 sys 0m4.545s 00:13:14.842 20:04:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.842 20:04:56 -- common/autotest_common.sh@10 -- # set +x 00:13:14.842 20:04:56 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:14.842 20:04:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:14.842 20:04:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.842 20:04:56 -- common/autotest_common.sh@10 -- # set +x 00:13:14.842 ************************************ 00:13:14.842 START TEST accel_rpc 00:13:14.842 ************************************ 00:13:14.842 20:04:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:15.101 * Looking for test storage... 
00:13:15.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:15.101 20:04:57 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:15.101 20:04:57 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62119 00:13:15.101 20:04:57 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:15.101 20:04:57 -- accel/accel_rpc.sh@15 -- # waitforlisten 62119 00:13:15.101 20:04:57 -- common/autotest_common.sh@817 -- # '[' -z 62119 ']' 00:13:15.101 20:04:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.101 20:04:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:15.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.101 20:04:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.101 20:04:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:15.101 20:04:57 -- common/autotest_common.sh@10 -- # set +x 00:13:15.101 [2024-04-24 20:04:57.203574] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:15.101 [2024-04-24 20:04:57.203643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62119 ] 00:13:15.101 [2024-04-24 20:04:57.330615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.362 [2024-04-24 20:04:57.430418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.930 20:04:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:15.930 20:04:58 -- common/autotest_common.sh@850 -- # return 0 00:13:15.930 20:04:58 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:15.930 20:04:58 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:15.930 20:04:58 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:15.930 20:04:58 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:15.930 20:04:58 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:15.930 20:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:15.930 20:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:15.930 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.189 ************************************ 00:13:16.189 START TEST accel_assign_opcode 00:13:16.189 ************************************ 00:13:16.189 20:04:58 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:13:16.189 20:04:58 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:16.189 20:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.189 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.189 [2024-04-24 20:04:58.193531] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:16.189 20:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.189 20:04:58 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:16.189 20:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.189 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.189 [2024-04-24 20:04:58.205487] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:16.189 20:04:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.189 20:04:58 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:16.189 20:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.189 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.189 20:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.189 20:04:58 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:16.189 20:04:58 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:16.189 20:04:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.189 20:04:58 -- accel/accel_rpc.sh@42 -- # grep software 00:13:16.189 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.189 20:04:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.189 software 00:13:16.189 00:13:16.189 real 0m0.253s 00:13:16.189 user 0m0.053s 00:13:16.189 sys 0m0.009s 00:13:16.189 20:04:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.189 ************************************ 00:13:16.189 END TEST accel_assign_opcode 00:13:16.189 ************************************ 00:13:16.189 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.449 20:04:58 -- accel/accel_rpc.sh@55 -- # killprocess 62119 00:13:16.449 20:04:58 -- common/autotest_common.sh@936 -- # '[' -z 62119 ']' 00:13:16.449 20:04:58 -- common/autotest_common.sh@940 -- # kill -0 62119 00:13:16.449 20:04:58 -- common/autotest_common.sh@941 -- # uname 00:13:16.449 20:04:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:16.449 20:04:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62119 00:13:16.449 killing process with pid 62119 00:13:16.449 20:04:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:16.449 20:04:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:16.449 20:04:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62119' 00:13:16.449 20:04:58 -- common/autotest_common.sh@955 -- # kill 62119 00:13:16.449 20:04:58 -- common/autotest_common.sh@960 -- # wait 62119 00:13:16.708 00:13:16.709 real 0m1.854s 00:13:16.709 user 0m1.946s 00:13:16.709 sys 0m0.440s 00:13:16.709 ************************************ 00:13:16.709 END TEST accel_rpc 00:13:16.709 ************************************ 00:13:16.709 20:04:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:16.709 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.709 20:04:58 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:16.709 20:04:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:16.709 20:04:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.709 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.967 ************************************ 00:13:16.967 START TEST app_cmdline 00:13:16.968 ************************************ 00:13:16.968 20:04:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:16.968 * Looking for test storage... 
00:13:16.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:16.968 20:04:59 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:16.968 20:04:59 -- app/cmdline.sh@17 -- # spdk_tgt_pid=62217 00:13:16.968 20:04:59 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:16.968 20:04:59 -- app/cmdline.sh@18 -- # waitforlisten 62217 00:13:16.968 20:04:59 -- common/autotest_common.sh@817 -- # '[' -z 62217 ']' 00:13:16.968 20:04:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.968 20:04:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.968 20:04:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.968 20:04:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.968 20:04:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.968 [2024-04-24 20:04:59.187039] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:16.968 [2024-04-24 20:04:59.187104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62217 ] 00:13:17.227 [2024-04-24 20:04:59.324111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.227 [2024-04-24 20:04:59.429809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.165 20:05:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:18.165 20:05:00 -- common/autotest_common.sh@850 -- # return 0 00:13:18.165 20:05:00 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:18.165 { 00:13:18.165 "version": "SPDK v24.05-pre git sha1 4907d1565", 00:13:18.165 "fields": { 00:13:18.165 "major": 24, 00:13:18.165 "minor": 5, 00:13:18.165 "patch": 0, 00:13:18.165 "suffix": "-pre", 00:13:18.165 "commit": "4907d1565" 00:13:18.165 } 00:13:18.165 } 00:13:18.165 20:05:00 -- app/cmdline.sh@22 -- # expected_methods=() 00:13:18.165 20:05:00 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:18.165 20:05:00 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:18.165 20:05:00 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:18.165 20:05:00 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:18.165 20:05:00 -- app/cmdline.sh@26 -- # sort 00:13:18.165 20:05:00 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:18.165 20:05:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:18.165 20:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.165 20:05:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:18.165 20:05:00 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:18.165 20:05:00 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:18.165 20:05:00 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:18.165 20:05:00 -- common/autotest_common.sh@638 -- # local es=0 00:13:18.165 20:05:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:18.165 20:05:00 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.165 20:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:18.165 20:05:00 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.165 20:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:18.165 20:05:00 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.165 20:05:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:18.165 20:05:00 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.165 20:05:00 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:18.165 20:05:00 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:18.426 request: 00:13:18.426 { 00:13:18.426 "method": "env_dpdk_get_mem_stats", 00:13:18.426 "req_id": 1 00:13:18.426 } 00:13:18.426 Got JSON-RPC error response 00:13:18.426 response: 00:13:18.426 { 00:13:18.426 "code": -32601, 00:13:18.426 "message": "Method not found" 00:13:18.426 } 00:13:18.426 20:05:00 -- common/autotest_common.sh@641 -- # es=1 00:13:18.426 20:05:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:18.426 20:05:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:18.426 20:05:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:18.426 20:05:00 -- app/cmdline.sh@1 -- # killprocess 62217 00:13:18.426 20:05:00 -- common/autotest_common.sh@936 -- # '[' -z 62217 ']' 00:13:18.426 20:05:00 -- common/autotest_common.sh@940 -- # kill -0 62217 00:13:18.426 20:05:00 -- common/autotest_common.sh@941 -- # uname 00:13:18.426 20:05:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.426 20:05:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62217 00:13:18.426 killing process with pid 62217 00:13:18.426 20:05:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.426 20:05:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.426 20:05:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62217' 00:13:18.426 20:05:00 -- common/autotest_common.sh@955 -- # kill 62217 00:13:18.426 20:05:00 -- common/autotest_common.sh@960 -- # wait 62217 00:13:18.996 ************************************ 00:13:18.996 END TEST app_cmdline 00:13:18.996 ************************************ 00:13:18.996 00:13:18.996 real 0m1.962s 00:13:18.996 user 0m2.400s 00:13:18.996 sys 0m0.425s 00:13:18.996 20:05:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.996 20:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:18.996 20:05:01 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:18.996 20:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:18.996 20:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.996 20:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:18.996 ************************************ 00:13:18.996 START TEST version 00:13:18.996 ************************************ 00:13:18.996 20:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:18.996 * Looking for test storage... 
00:13:18.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:18.996 20:05:01 -- app/version.sh@17 -- # get_header_version major 00:13:18.996 20:05:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.996 20:05:01 -- app/version.sh@14 -- # tr -d '"' 00:13:18.996 20:05:01 -- app/version.sh@14 -- # cut -f2 00:13:18.996 20:05:01 -- app/version.sh@17 -- # major=24 00:13:18.996 20:05:01 -- app/version.sh@18 -- # get_header_version minor 00:13:18.996 20:05:01 -- app/version.sh@14 -- # cut -f2 00:13:18.996 20:05:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.996 20:05:01 -- app/version.sh@14 -- # tr -d '"' 00:13:18.996 20:05:01 -- app/version.sh@18 -- # minor=5 00:13:18.996 20:05:01 -- app/version.sh@19 -- # get_header_version patch 00:13:18.996 20:05:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:18.996 20:05:01 -- app/version.sh@14 -- # cut -f2 00:13:18.996 20:05:01 -- app/version.sh@14 -- # tr -d '"' 00:13:19.256 20:05:01 -- app/version.sh@19 -- # patch=0 00:13:19.256 20:05:01 -- app/version.sh@20 -- # get_header_version suffix 00:13:19.256 20:05:01 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:19.256 20:05:01 -- app/version.sh@14 -- # cut -f2 00:13:19.256 20:05:01 -- app/version.sh@14 -- # tr -d '"' 00:13:19.256 20:05:01 -- app/version.sh@20 -- # suffix=-pre 00:13:19.256 20:05:01 -- app/version.sh@22 -- # version=24.5 00:13:19.256 20:05:01 -- app/version.sh@25 -- # (( patch != 0 )) 00:13:19.256 20:05:01 -- app/version.sh@28 -- # version=24.5rc0 00:13:19.256 20:05:01 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:19.256 20:05:01 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:19.256 20:05:01 -- app/version.sh@30 -- # py_version=24.5rc0 00:13:19.256 20:05:01 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:19.256 00:13:19.256 real 0m0.197s 00:13:19.256 user 0m0.119s 00:13:19.256 sys 0m0.116s 00:13:19.256 20:05:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:19.256 20:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.256 ************************************ 00:13:19.256 END TEST version 00:13:19.256 ************************************ 00:13:19.256 20:05:01 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:19.256 20:05:01 -- spdk/autotest.sh@194 -- # uname -s 00:13:19.256 20:05:01 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:19.256 20:05:01 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:19.256 20:05:01 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:13:19.256 20:05:01 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:13:19.256 20:05:01 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:13:19.256 20:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.256 20:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.256 20:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:19.256 ************************************ 00:13:19.256 START TEST spdk_dd 00:13:19.256 
************************************ 00:13:19.256 20:05:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:13:19.537 * Looking for test storage... 00:13:19.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:19.537 20:05:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:19.537 20:05:01 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.537 20:05:01 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.537 20:05:01 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.537 20:05:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.537 20:05:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.537 20:05:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.537 20:05:01 -- paths/export.sh@5 -- # export PATH 00:13:19.537 20:05:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.537 20:05:01 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:19.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:19.795 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:19.795 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.055 20:05:02 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:13:20.055 20:05:02 -- dd/dd.sh@11 -- # nvme_in_userspace 00:13:20.055 20:05:02 -- scripts/common.sh@309 -- # local bdf bdfs 00:13:20.055 20:05:02 -- scripts/common.sh@310 -- # local nvmes 00:13:20.055 20:05:02 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:20.055 20:05:02 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:20.055 20:05:02 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:20.055 20:05:02 -- scripts/common.sh@295 -- # local bdf= 00:13:20.055 20:05:02 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:20.055 20:05:02 -- scripts/common.sh@230 -- # local class 
00:13:20.055 20:05:02 -- scripts/common.sh@231 -- # local subclass 00:13:20.055 20:05:02 -- scripts/common.sh@232 -- # local progif 00:13:20.055 20:05:02 -- scripts/common.sh@233 -- # printf %02x 1 00:13:20.056 20:05:02 -- scripts/common.sh@233 -- # class=01 00:13:20.056 20:05:02 -- scripts/common.sh@234 -- # printf %02x 8 00:13:20.056 20:05:02 -- scripts/common.sh@234 -- # subclass=08 00:13:20.056 20:05:02 -- scripts/common.sh@235 -- # printf %02x 2 00:13:20.056 20:05:02 -- scripts/common.sh@235 -- # progif=02 00:13:20.056 20:05:02 -- scripts/common.sh@237 -- # hash lspci 00:13:20.056 20:05:02 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:20.056 20:05:02 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:20.056 20:05:02 -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:20.056 20:05:02 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:20.056 20:05:02 -- scripts/common.sh@242 -- # tr -d '"' 00:13:20.056 20:05:02 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:20.056 20:05:02 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:20.056 20:05:02 -- scripts/common.sh@15 -- # local i 00:13:20.056 20:05:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:20.056 20:05:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:20.056 20:05:02 -- scripts/common.sh@24 -- # return 0 00:13:20.056 20:05:02 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:20.056 20:05:02 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:20.056 20:05:02 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:20.056 20:05:02 -- scripts/common.sh@15 -- # local i 00:13:20.056 20:05:02 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:20.056 20:05:02 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:20.056 20:05:02 -- scripts/common.sh@24 -- # return 0 00:13:20.056 20:05:02 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:20.056 20:05:02 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:20.056 20:05:02 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:20.056 20:05:02 -- scripts/common.sh@320 -- # uname -s 00:13:20.056 20:05:02 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:20.056 20:05:02 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:20.056 20:05:02 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:20.056 20:05:02 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:20.056 20:05:02 -- scripts/common.sh@320 -- # uname -s 00:13:20.056 20:05:02 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:20.056 20:05:02 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:20.056 20:05:02 -- scripts/common.sh@325 -- # (( 2 )) 00:13:20.056 20:05:02 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:13:20.056 20:05:02 -- dd/dd.sh@13 -- # check_liburing 00:13:20.056 20:05:02 -- dd/common.sh@139 -- # local lib so 00:13:20.056 20:05:02 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:13:20.056 20:05:02 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.056 20:05:02 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:13:20.056 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:20.057 20:05:02 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:13:20.057 20:05:02 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:13:20.057 * spdk_dd linked to liburing 00:13:20.057 20:05:02 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:20.057 20:05:02 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:20.057 20:05:02 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:20.057 20:05:02 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:20.057 20:05:02 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:20.057 20:05:02 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:20.057 20:05:02 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:20.057 20:05:02 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:20.057 20:05:02 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:20.057 20:05:02 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:20.057 20:05:02 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:20.057 20:05:02 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:20.057 20:05:02 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:20.057 20:05:02 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:20.057 20:05:02 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:20.057 20:05:02 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:20.057 20:05:02 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:20.057 20:05:02 -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:20.057 20:05:02 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:20.057 20:05:02 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:20.057 20:05:02 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:20.057 20:05:02 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:20.057 20:05:02 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:20.057 20:05:02 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:20.057 20:05:02 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:20.057 20:05:02 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:20.057 20:05:02 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:20.057 20:05:02 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:20.057 20:05:02 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:20.057 20:05:02 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:20.057 20:05:02 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:20.057 20:05:02 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:20.057 20:05:02 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:20.057 20:05:02 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:20.057 20:05:02 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:20.057 20:05:02 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:20.057 20:05:02 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:20.057 20:05:02 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:20.057 20:05:02 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:20.057 20:05:02 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:20.057 20:05:02 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:20.057 20:05:02 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:20.057 20:05:02 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:13:20.057 20:05:02 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:13:20.057 20:05:02 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:20.057 20:05:02 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:13:20.057 20:05:02 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:13:20.057 20:05:02 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:13:20.057 20:05:02 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:13:20.057 20:05:02 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:13:20.057 20:05:02 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:13:20.057 20:05:02 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:13:20.057 20:05:02 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:13:20.057 20:05:02 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:13:20.057 20:05:02 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:13:20.057 20:05:02 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:13:20.057 20:05:02 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:13:20.057 20:05:02 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:13:20.057 20:05:02 -- 
common/build_config.sh@64 -- # CONFIG_APPS=y 00:13:20.057 20:05:02 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:13:20.057 20:05:02 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:13:20.057 20:05:02 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:13:20.057 20:05:02 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:20.057 20:05:02 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:13:20.057 20:05:02 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:13:20.057 20:05:02 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:13:20.057 20:05:02 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:13:20.057 20:05:02 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:13:20.057 20:05:02 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:13:20.057 20:05:02 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:13:20.057 20:05:02 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:13:20.057 20:05:02 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:13:20.057 20:05:02 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:13:20.057 20:05:02 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:13:20.057 20:05:02 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:20.057 20:05:02 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:13:20.057 20:05:02 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:13:20.057 20:05:02 -- dd/common.sh@149 -- # [[ y != y ]] 00:13:20.057 20:05:02 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:13:20.057 20:05:02 -- dd/common.sh@156 -- # export liburing_in_use=1 00:13:20.057 20:05:02 -- dd/common.sh@156 -- # liburing_in_use=1 00:13:20.057 20:05:02 -- dd/common.sh@157 -- # return 0 00:13:20.057 20:05:02 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:13:20.057 20:05:02 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:13:20.057 20:05:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:20.057 20:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.057 20:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.057 ************************************ 00:13:20.057 START TEST spdk_dd_basic_rw 00:13:20.057 ************************************ 00:13:20.057 20:05:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:13:20.316 * Looking for test storage... 
00:13:20.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:20.316 20:05:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.316 20:05:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.316 20:05:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.316 20:05:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.316 20:05:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.316 20:05:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.316 20:05:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.316 20:05:02 -- paths/export.sh@5 -- # export PATH 00:13:20.316 20:05:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.316 20:05:02 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:13:20.316 20:05:02 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:13:20.316 20:05:02 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:13:20.316 20:05:02 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:13:20.316 20:05:02 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:13:20.316 20:05:02 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:13:20.316 20:05:02 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:13:20.316 20:05:02 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:20.316 20:05:02 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:20.316 20:05:02 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:13:20.316 20:05:02 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:13:20.316 20:05:02 -- dd/common.sh@126 -- # mapfile -t id 00:13:20.316 20:05:02 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:13:20.577 20:05:02 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:13:20.577 20:05:02 -- dd/common.sh@130 -- # lbaf=04 00:13:20.578 20:05:02 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:13:20.578 20:05:02 -- dd/common.sh@132 -- # lbaf=4096 00:13:20.578 20:05:02 -- dd/common.sh@134 -- # echo 4096 00:13:20.578 20:05:02 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:13:20.578 20:05:02 -- dd/basic_rw.sh@96 -- # gen_conf 00:13:20.578 20:05:02 -- dd/common.sh@31 -- # xtrace_disable 00:13:20.578 20:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.578 20:05:02 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:20.578 20:05:02 -- dd/basic_rw.sh@96 -- # : 00:13:20.578 20:05:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:20.578 20:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:20.578 20:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:20.578 { 
00:13:20.578 "subsystems": [ 00:13:20.578 { 00:13:20.578 "subsystem": "bdev", 00:13:20.578 "config": [ 00:13:20.578 { 00:13:20.578 "params": { 00:13:20.578 "trtype": "pcie", 00:13:20.578 "traddr": "0000:00:10.0", 00:13:20.578 "name": "Nvme0" 00:13:20.578 }, 00:13:20.578 "method": "bdev_nvme_attach_controller" 00:13:20.578 }, 00:13:20.578 { 00:13:20.578 "method": "bdev_wait_for_examine" 00:13:20.578 } 00:13:20.578 ] 00:13:20.578 } 00:13:20.578 ] 00:13:20.578 } 00:13:20.578 ************************************ 00:13:20.578 START TEST dd_bs_lt_native_bs 00:13:20.578 ************************************ 00:13:20.578 20:05:02 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:20.578 20:05:02 -- common/autotest_common.sh@638 -- # local es=0 00:13:20.578 20:05:02 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:20.578 20:05:02 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:20.578 20:05:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:20.578 20:05:02 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:20.578 20:05:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:20.578 20:05:02 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:20.578 20:05:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:20.578 20:05:02 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:20.578 20:05:02 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:20.578 20:05:02 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:20.578 [2024-04-24 20:05:02.757537] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:20.578 [2024-04-24 20:05:02.757664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62564 ] 00:13:20.836 [2024-04-24 20:05:02.895028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.836 [2024-04-24 20:05:02.993698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.094 [2024-04-24 20:05:03.128244] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:13:21.094 [2024-04-24 20:05:03.128429] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:21.094 [2024-04-24 20:05:03.231980] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:21.094 20:05:03 -- common/autotest_common.sh@641 -- # es=234 00:13:21.094 20:05:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:21.094 20:05:03 -- common/autotest_common.sh@650 -- # es=106 00:13:21.094 20:05:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:21.094 20:05:03 -- common/autotest_common.sh@658 -- # es=1 00:13:21.094 20:05:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:21.094 00:13:21.094 real 0m0.642s 00:13:21.094 user 0m0.411s 00:13:21.094 sys 0m0.124s 00:13:21.094 20:05:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:21.094 20:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:21.094 ************************************ 00:13:21.094 END TEST dd_bs_lt_native_bs 00:13:21.094 ************************************ 00:13:21.353 20:05:03 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:13:21.353 20:05:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:21.353 20:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:21.353 20:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:21.353 ************************************ 00:13:21.353 START TEST dd_rw 00:13:21.353 ************************************ 00:13:21.353 20:05:03 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:13:21.353 20:05:03 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:13:21.353 20:05:03 -- dd/basic_rw.sh@12 -- # local count size 00:13:21.353 20:05:03 -- dd/basic_rw.sh@13 -- # local qds bss 00:13:21.353 20:05:03 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:13:21.353 20:05:03 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:21.353 20:05:03 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:21.353 20:05:03 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:21.353 20:05:03 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:21.353 20:05:03 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:21.353 20:05:03 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:21.353 20:05:03 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:21.353 20:05:03 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:21.353 20:05:03 -- dd/basic_rw.sh@23 -- # count=15 00:13:21.353 20:05:03 -- dd/basic_rw.sh@24 -- # count=15 00:13:21.353 20:05:03 -- dd/basic_rw.sh@25 -- # size=61440 00:13:21.353 20:05:03 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:13:21.353 20:05:03 -- dd/common.sh@98 -- # xtrace_disable 00:13:21.353 20:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:21.922 20:05:03 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
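The dd_bs_lt_native_bs block that finished above is a negative test: spdk_dd is handed --bs=2048 against a bdev whose native block size is 4096, it fails with "--bs value cannot be less than input (1) neither output (4096) native block size", and the NOT wrapper turns that non-zero exit into a pass (the es=234/106/1 bookkeeping). A stand-alone sketch of the same expectation; SPDK_DD is the binary path from the trace, bdev.json is assumed to hold the bdev config printed in this log, and the temp file stands in for the data fed on /dev/fd/62.

# Sketch: spdk_dd must refuse a --bs smaller than the target's native block size.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
tmp_in=$(mktemp)
head -c 8192 /dev/urandom > "$tmp_in"
if "$SPDK_DD" --if="$tmp_in" --ob=Nvme0n1 --bs=2048 --json bdev.json; then
    echo "FAIL: bs=2048 was accepted despite a 4096-byte native block size" >&2
else
    echo "PASS: spdk_dd rejected the undersized --bs, as the NOT wrapper expects"
fi
rm -f "$tmp_in"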
00:13:21.922 20:05:03 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:21.922 20:05:03 -- dd/common.sh@31 -- # xtrace_disable 00:13:21.922 20:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:21.922 [2024-04-24 20:05:03.962879] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:21.922 [2024-04-24 20:05:03.963772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62599 ] 00:13:21.922 { 00:13:21.922 "subsystems": [ 00:13:21.922 { 00:13:21.922 "subsystem": "bdev", 00:13:21.922 "config": [ 00:13:21.922 { 00:13:21.922 "params": { 00:13:21.922 "trtype": "pcie", 00:13:21.922 "traddr": "0000:00:10.0", 00:13:21.922 "name": "Nvme0" 00:13:21.922 }, 00:13:21.922 "method": "bdev_nvme_attach_controller" 00:13:21.922 }, 00:13:21.922 { 00:13:21.922 "method": "bdev_wait_for_examine" 00:13:21.922 } 00:13:21.922 ] 00:13:21.922 } 00:13:21.922 ] 00:13:21.922 } 00:13:21.922 [2024-04-24 20:05:04.108719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.181 [2024-04-24 20:05:04.251806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.441  Copying: 60/60 [kB] (average 29 MBps) 00:13:22.441 00:13:22.441 20:05:04 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:13:22.441 20:05:04 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:22.441 20:05:04 -- dd/common.sh@31 -- # xtrace_disable 00:13:22.441 20:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:22.441 [2024-04-24 20:05:04.649494] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
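Each dd_rw iteration follows the pattern visible above and below this point: write dd.dump0 to the Nvme0n1 bdev at a given --bs/--qd, read the same number of blocks back into dd.dump1, then compare the two files with diff. A condensed sketch of one iteration under the same SPDK_DD/bdev.json assumptions as the previous sketch; the file names mirror the dd.dump0/dd.dump1 used here, with full paths shortened.

# One basic_rw-style iteration as traced here: write, read back, verify.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
bs=4096 qd=1 count=15

head -c $((bs * count)) /dev/urandom > dd.dump0                       # 61440-byte test pattern
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json bdev.json
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json bdev.json
diff -q dd.dump0 dd.dump1 && echo "bs=$bs qd=$qd: read-back matches"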
00:13:22.441 [2024-04-24 20:05:04.649565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62618 ] 00:13:22.441 { 00:13:22.441 "subsystems": [ 00:13:22.441 { 00:13:22.441 "subsystem": "bdev", 00:13:22.441 "config": [ 00:13:22.441 { 00:13:22.441 "params": { 00:13:22.441 "trtype": "pcie", 00:13:22.441 "traddr": "0000:00:10.0", 00:13:22.441 "name": "Nvme0" 00:13:22.441 }, 00:13:22.441 "method": "bdev_nvme_attach_controller" 00:13:22.441 }, 00:13:22.441 { 00:13:22.441 "method": "bdev_wait_for_examine" 00:13:22.441 } 00:13:22.441 ] 00:13:22.441 } 00:13:22.441 ] 00:13:22.441 } 00:13:22.700 [2024-04-24 20:05:04.786576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.700 [2024-04-24 20:05:04.886516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.218  Copying: 60/60 [kB] (average 29 MBps) 00:13:23.218 00:13:23.218 20:05:05 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:23.218 20:05:05 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:13:23.218 20:05:05 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:23.218 20:05:05 -- dd/common.sh@11 -- # local nvme_ref= 00:13:23.218 20:05:05 -- dd/common.sh@12 -- # local size=61440 00:13:23.218 20:05:05 -- dd/common.sh@14 -- # local bs=1048576 00:13:23.218 20:05:05 -- dd/common.sh@15 -- # local count=1 00:13:23.218 20:05:05 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:23.218 20:05:05 -- dd/common.sh@18 -- # gen_conf 00:13:23.218 20:05:05 -- dd/common.sh@31 -- # xtrace_disable 00:13:23.218 20:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:23.218 [2024-04-24 20:05:05.290591] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:23.218 [2024-04-24 20:05:05.291028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62628 ] 00:13:23.218 { 00:13:23.218 "subsystems": [ 00:13:23.218 { 00:13:23.218 "subsystem": "bdev", 00:13:23.218 "config": [ 00:13:23.218 { 00:13:23.218 "params": { 00:13:23.218 "trtype": "pcie", 00:13:23.218 "traddr": "0000:00:10.0", 00:13:23.218 "name": "Nvme0" 00:13:23.218 }, 00:13:23.218 "method": "bdev_nvme_attach_controller" 00:13:23.218 }, 00:13:23.218 { 00:13:23.218 "method": "bdev_wait_for_examine" 00:13:23.218 } 00:13:23.218 ] 00:13:23.218 } 00:13:23.218 ] 00:13:23.218 } 00:13:23.218 [2024-04-24 20:05:05.426297] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.476 [2024-04-24 20:05:05.531481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.736  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:23.736 00:13:23.736 20:05:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:23.736 20:05:05 -- dd/basic_rw.sh@23 -- # count=15 00:13:23.736 20:05:05 -- dd/basic_rw.sh@24 -- # count=15 00:13:23.736 20:05:05 -- dd/basic_rw.sh@25 -- # size=61440 00:13:23.736 20:05:05 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:13:23.736 20:05:05 -- dd/common.sh@98 -- # xtrace_disable 00:13:23.736 20:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:24.302 20:05:06 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:13:24.302 20:05:06 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:24.302 20:05:06 -- dd/common.sh@31 -- # xtrace_disable 00:13:24.302 20:05:06 -- common/autotest_common.sh@10 -- # set +x 00:13:24.302 [2024-04-24 20:05:06.449816] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:24.302 [2024-04-24 20:05:06.449963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62653 ] 00:13:24.302 { 00:13:24.302 "subsystems": [ 00:13:24.302 { 00:13:24.302 "subsystem": "bdev", 00:13:24.302 "config": [ 00:13:24.302 { 00:13:24.302 "params": { 00:13:24.302 "trtype": "pcie", 00:13:24.302 "traddr": "0000:00:10.0", 00:13:24.302 "name": "Nvme0" 00:13:24.302 }, 00:13:24.302 "method": "bdev_nvme_attach_controller" 00:13:24.302 }, 00:13:24.302 { 00:13:24.302 "method": "bdev_wait_for_examine" 00:13:24.303 } 00:13:24.303 ] 00:13:24.303 } 00:13:24.303 ] 00:13:24.303 } 00:13:24.561 [2024-04-24 20:05:06.589124] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.561 [2024-04-24 20:05:06.694225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.819  Copying: 60/60 [kB] (average 58 MBps) 00:13:24.819 00:13:24.819 20:05:07 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:24.819 20:05:07 -- dd/common.sh@31 -- # xtrace_disable 00:13:24.819 20:05:07 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:13:24.819 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.078 [2024-04-24 20:05:07.106859] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
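The { "subsystems": [ ... ] } blocks repeated throughout this trace are the entire bdev configuration handed to spdk_dd via --json: attach one NVMe controller at PCIe address 0000:00:10.0 under the name "Nvme0" (which exposes the Nvme0n1 bdev), then wait for bdev examination to finish before I/O starts. A small sketch of producing that config; write_bdev_conf is an illustrative stand-in for the gen_conf helper seen in the xtrace output.

# Emit the bdev config used by every spdk_dd invocation in this run.
write_bdev_conf() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
JSON
}

write_bdev_conf > bdev.json
# The test itself never writes a file; the /dev/fd/62 arguments in the trace suggest
# the config is handed over on a file descriptor, e.g. spdk_dd ... --json <(write_bdev_conf)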
00:13:25.078 [2024-04-24 20:05:07.107028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62666 ] 00:13:25.078 { 00:13:25.078 "subsystems": [ 00:13:25.078 { 00:13:25.078 "subsystem": "bdev", 00:13:25.078 "config": [ 00:13:25.078 { 00:13:25.078 "params": { 00:13:25.078 "trtype": "pcie", 00:13:25.078 "traddr": "0000:00:10.0", 00:13:25.078 "name": "Nvme0" 00:13:25.078 }, 00:13:25.078 "method": "bdev_nvme_attach_controller" 00:13:25.078 }, 00:13:25.078 { 00:13:25.078 "method": "bdev_wait_for_examine" 00:13:25.078 } 00:13:25.078 ] 00:13:25.078 } 00:13:25.078 ] 00:13:25.078 } 00:13:25.078 [2024-04-24 20:05:07.247160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.338 [2024-04-24 20:05:07.351253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.598  Copying: 60/60 [kB] (average 58 MBps) 00:13:25.598 00:13:25.598 20:05:07 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:25.598 20:05:07 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:13:25.598 20:05:07 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:25.598 20:05:07 -- dd/common.sh@11 -- # local nvme_ref= 00:13:25.598 20:05:07 -- dd/common.sh@12 -- # local size=61440 00:13:25.598 20:05:07 -- dd/common.sh@14 -- # local bs=1048576 00:13:25.598 20:05:07 -- dd/common.sh@15 -- # local count=1 00:13:25.598 20:05:07 -- dd/common.sh@18 -- # gen_conf 00:13:25.598 20:05:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:25.598 20:05:07 -- dd/common.sh@31 -- # xtrace_disable 00:13:25.598 20:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:25.598 [2024-04-24 20:05:07.760917] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:25.598 [2024-04-24 20:05:07.760987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:13:25.598 { 00:13:25.598 "subsystems": [ 00:13:25.598 { 00:13:25.598 "subsystem": "bdev", 00:13:25.598 "config": [ 00:13:25.598 { 00:13:25.598 "params": { 00:13:25.598 "trtype": "pcie", 00:13:25.598 "traddr": "0000:00:10.0", 00:13:25.598 "name": "Nvme0" 00:13:25.598 }, 00:13:25.598 "method": "bdev_nvme_attach_controller" 00:13:25.598 }, 00:13:25.598 { 00:13:25.598 "method": "bdev_wait_for_examine" 00:13:25.598 } 00:13:25.598 ] 00:13:25.598 } 00:13:25.598 ] 00:13:25.598 } 00:13:25.857 [2024-04-24 20:05:07.895921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.857 [2024-04-24 20:05:07.992631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.116  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:26.116 00:13:26.116 20:05:08 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:26.116 20:05:08 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:26.116 20:05:08 -- dd/basic_rw.sh@23 -- # count=7 00:13:26.116 20:05:08 -- dd/basic_rw.sh@24 -- # count=7 00:13:26.116 20:05:08 -- dd/basic_rw.sh@25 -- # size=57344 00:13:26.116 20:05:08 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:13:26.116 20:05:08 -- dd/common.sh@98 -- # xtrace_disable 00:13:26.116 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.692 20:05:08 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:13:26.692 20:05:08 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:26.692 20:05:08 -- dd/common.sh@31 -- # xtrace_disable 00:13:26.692 20:05:08 -- common/autotest_common.sh@10 -- # set +x 00:13:26.692 [2024-04-24 20:05:08.814417] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
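The counts and sizes in this trace (count=15/size=61440 at bs=4096, count=7/size=57344 at bs=8192, and count=3/size=49152 at bs=16384 further down) follow from the setup lines at the start of dd_rw: the block sizes are native_bs shifted left by 0, 1 and 2, each paired with queue depths 1 and 64, and the block count is capped so the transfer stays within the original 61440-byte buffer. A hedged reconstruction of that sweep; the count formula is inferred from the logged numbers, not quoted from basic_rw.sh.

# Reconstruction of the dd_rw sweep implied by this trace.
native_bs=4096
qds=(1 64)
bss=()
for i in 0 1 2; do
    bss+=($((native_bs << i)))            # 4096, 8192, 16384
done

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))             # 15, 7, 3: largest whole-block fit in 61440 bytes
        size=$((count * bs))              # 61440, 57344, 49152 as reported above and below
        echo "bs=$bs qd=$qd count=$count size=$size"
        # write/read/verify here exactly as in the round-trip sketch earlier
    done
done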
00:13:26.692 [2024-04-24 20:05:08.814733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62706 ] 00:13:26.692 { 00:13:26.692 "subsystems": [ 00:13:26.692 { 00:13:26.692 "subsystem": "bdev", 00:13:26.692 "config": [ 00:13:26.692 { 00:13:26.692 "params": { 00:13:26.692 "trtype": "pcie", 00:13:26.692 "traddr": "0000:00:10.0", 00:13:26.692 "name": "Nvme0" 00:13:26.692 }, 00:13:26.692 "method": "bdev_nvme_attach_controller" 00:13:26.692 }, 00:13:26.692 { 00:13:26.692 "method": "bdev_wait_for_examine" 00:13:26.692 } 00:13:26.692 ] 00:13:26.692 } 00:13:26.692 ] 00:13:26.692 } 00:13:26.951 [2024-04-24 20:05:08.960726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.951 [2024-04-24 20:05:09.063089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.210  Copying: 56/56 [kB] (average 27 MBps) 00:13:27.210 00:13:27.210 20:05:09 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:27.210 20:05:09 -- dd/common.sh@31 -- # xtrace_disable 00:13:27.210 20:05:09 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:13:27.210 20:05:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.470 [2024-04-24 20:05:09.473281] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:27.470 [2024-04-24 20:05:09.473489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62722 ] 00:13:27.470 { 00:13:27.470 "subsystems": [ 00:13:27.470 { 00:13:27.470 "subsystem": "bdev", 00:13:27.470 "config": [ 00:13:27.470 { 00:13:27.470 "params": { 00:13:27.470 "trtype": "pcie", 00:13:27.470 "traddr": "0000:00:10.0", 00:13:27.470 "name": "Nvme0" 00:13:27.470 }, 00:13:27.470 "method": "bdev_nvme_attach_controller" 00:13:27.470 }, 00:13:27.470 { 00:13:27.470 "method": "bdev_wait_for_examine" 00:13:27.470 } 00:13:27.470 ] 00:13:27.470 } 00:13:27.470 ] 00:13:27.470 } 00:13:27.470 [2024-04-24 20:05:09.612865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.470 [2024-04-24 20:05:09.712669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.005  Copying: 56/56 [kB] (average 54 MBps) 00:13:28.005 00:13:28.005 20:05:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:28.005 20:05:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:13:28.005 20:05:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:28.005 20:05:10 -- dd/common.sh@11 -- # local nvme_ref= 00:13:28.005 20:05:10 -- dd/common.sh@12 -- # local size=57344 00:13:28.005 20:05:10 -- dd/common.sh@14 -- # local bs=1048576 00:13:28.005 20:05:10 -- dd/common.sh@15 -- # local count=1 00:13:28.005 20:05:10 -- dd/common.sh@18 -- # gen_conf 00:13:28.005 20:05:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:28.005 20:05:10 -- dd/common.sh@31 -- # xtrace_disable 00:13:28.005 20:05:10 -- common/autotest_common.sh@10 -- # set +x 00:13:28.005 [2024-04-24 20:05:10.110308] Starting SPDK v24.05-pre git sha1 
4907d1565 / DPDK 23.11.0 initialization... 00:13:28.005 [2024-04-24 20:05:10.110389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62737 ] 00:13:28.005 { 00:13:28.005 "subsystems": [ 00:13:28.005 { 00:13:28.005 "subsystem": "bdev", 00:13:28.005 "config": [ 00:13:28.005 { 00:13:28.005 "params": { 00:13:28.005 "trtype": "pcie", 00:13:28.005 "traddr": "0000:00:10.0", 00:13:28.005 "name": "Nvme0" 00:13:28.005 }, 00:13:28.005 "method": "bdev_nvme_attach_controller" 00:13:28.005 }, 00:13:28.005 { 00:13:28.005 "method": "bdev_wait_for_examine" 00:13:28.005 } 00:13:28.005 ] 00:13:28.005 } 00:13:28.005 ] 00:13:28.005 } 00:13:28.005 [2024-04-24 20:05:10.239015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.264 [2024-04-24 20:05:10.337117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.524  Copying: 1024/1024 [kB] (average 500 MBps) 00:13:28.524 00:13:28.524 20:05:10 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:28.524 20:05:10 -- dd/basic_rw.sh@23 -- # count=7 00:13:28.524 20:05:10 -- dd/basic_rw.sh@24 -- # count=7 00:13:28.524 20:05:10 -- dd/basic_rw.sh@25 -- # size=57344 00:13:28.524 20:05:10 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:13:28.524 20:05:10 -- dd/common.sh@98 -- # xtrace_disable 00:13:28.524 20:05:10 -- common/autotest_common.sh@10 -- # set +x 00:13:29.093 20:05:11 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:13:29.094 20:05:11 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:29.094 20:05:11 -- dd/common.sh@31 -- # xtrace_disable 00:13:29.094 20:05:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.094 { 00:13:29.094 "subsystems": [ 00:13:29.094 { 00:13:29.094 "subsystem": "bdev", 00:13:29.094 "config": [ 00:13:29.094 { 00:13:29.094 "params": { 00:13:29.094 "trtype": "pcie", 00:13:29.094 "traddr": "0000:00:10.0", 00:13:29.094 "name": "Nvme0" 00:13:29.094 }, 00:13:29.094 "method": "bdev_nvme_attach_controller" 00:13:29.094 }, 00:13:29.094 { 00:13:29.094 "method": "bdev_wait_for_examine" 00:13:29.094 } 00:13:29.094 ] 00:13:29.094 } 00:13:29.094 ] 00:13:29.094 } 00:13:29.094 [2024-04-24 20:05:11.131892] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
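Between iterations the trace shows clear_nvme Nvme0n1 '' <size> followed by an spdk_dd write from /dev/zero with bs=1048576 and count=1 ("Copying: 1024/1024 [kB]"). The apparent intent is to overwrite the first megabyte of the bdev so one case cannot pass on data left behind by the previous one; that purpose is inferred from the trace rather than quoted from dd/common.sh. A minimal sketch of the step under the same SPDK_DD/bdev.json assumptions as the earlier sketches.

# Zero the first 1 MiB of the target bdev between test cases.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
clear_bdev() {
    local bdev=$1
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob="$bdev" --count=1 --json bdev.json
}
clear_bdev Nvme0n1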
00:13:29.094 [2024-04-24 20:05:11.132078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62756 ] 00:13:29.094 [2024-04-24 20:05:11.273600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.353 [2024-04-24 20:05:11.376232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.612  Copying: 56/56 [kB] (average 54 MBps) 00:13:29.612 00:13:29.612 20:05:11 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:13:29.612 20:05:11 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:29.612 20:05:11 -- dd/common.sh@31 -- # xtrace_disable 00:13:29.612 20:05:11 -- common/autotest_common.sh@10 -- # set +x 00:13:29.612 [2024-04-24 20:05:11.780805] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:29.612 [2024-04-24 20:05:11.780911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62770 ] 00:13:29.612 { 00:13:29.612 "subsystems": [ 00:13:29.612 { 00:13:29.612 "subsystem": "bdev", 00:13:29.612 "config": [ 00:13:29.612 { 00:13:29.612 "params": { 00:13:29.612 "trtype": "pcie", 00:13:29.612 "traddr": "0000:00:10.0", 00:13:29.612 "name": "Nvme0" 00:13:29.612 }, 00:13:29.612 "method": "bdev_nvme_attach_controller" 00:13:29.612 }, 00:13:29.612 { 00:13:29.612 "method": "bdev_wait_for_examine" 00:13:29.612 } 00:13:29.612 ] 00:13:29.612 } 00:13:29.612 ] 00:13:29.612 } 00:13:29.871 [2024-04-24 20:05:11.917980] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.871 [2024-04-24 20:05:12.008944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.131  Copying: 56/56 [kB] (average 54 MBps) 00:13:30.131 00:13:30.131 20:05:12 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:30.131 20:05:12 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:13:30.131 20:05:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:30.131 20:05:12 -- dd/common.sh@11 -- # local nvme_ref= 00:13:30.131 20:05:12 -- dd/common.sh@12 -- # local size=57344 00:13:30.131 20:05:12 -- dd/common.sh@14 -- # local bs=1048576 00:13:30.131 20:05:12 -- dd/common.sh@15 -- # local count=1 00:13:30.131 20:05:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:30.131 20:05:12 -- dd/common.sh@18 -- # gen_conf 00:13:30.131 20:05:12 -- dd/common.sh@31 -- # xtrace_disable 00:13:30.131 20:05:12 -- common/autotest_common.sh@10 -- # set +x 00:13:30.464 [2024-04-24 20:05:12.419132] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:30.464 [2024-04-24 20:05:12.419337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62785 ] 00:13:30.464 { 00:13:30.464 "subsystems": [ 00:13:30.464 { 00:13:30.464 "subsystem": "bdev", 00:13:30.464 "config": [ 00:13:30.464 { 00:13:30.464 "params": { 00:13:30.464 "trtype": "pcie", 00:13:30.464 "traddr": "0000:00:10.0", 00:13:30.464 "name": "Nvme0" 00:13:30.464 }, 00:13:30.464 "method": "bdev_nvme_attach_controller" 00:13:30.464 }, 00:13:30.464 { 00:13:30.464 "method": "bdev_wait_for_examine" 00:13:30.464 } 00:13:30.464 ] 00:13:30.464 } 00:13:30.464 ] 00:13:30.464 } 00:13:30.464 [2024-04-24 20:05:12.559498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.464 [2024-04-24 20:05:12.661306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.983  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:30.983 00:13:30.983 20:05:13 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:30.983 20:05:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:30.983 20:05:13 -- dd/basic_rw.sh@23 -- # count=3 00:13:30.983 20:05:13 -- dd/basic_rw.sh@24 -- # count=3 00:13:30.983 20:05:13 -- dd/basic_rw.sh@25 -- # size=49152 00:13:30.983 20:05:13 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:13:30.983 20:05:13 -- dd/common.sh@98 -- # xtrace_disable 00:13:30.983 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:13:31.242 20:05:13 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:13:31.242 20:05:13 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:31.242 20:05:13 -- dd/common.sh@31 -- # xtrace_disable 00:13:31.242 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:13:31.242 [2024-04-24 20:05:13.392988] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:31.242 [2024-04-24 20:05:13.393061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62804 ] 00:13:31.242 { 00:13:31.242 "subsystems": [ 00:13:31.242 { 00:13:31.242 "subsystem": "bdev", 00:13:31.242 "config": [ 00:13:31.242 { 00:13:31.242 "params": { 00:13:31.242 "trtype": "pcie", 00:13:31.242 "traddr": "0000:00:10.0", 00:13:31.242 "name": "Nvme0" 00:13:31.242 }, 00:13:31.242 "method": "bdev_nvme_attach_controller" 00:13:31.242 }, 00:13:31.242 { 00:13:31.242 "method": "bdev_wait_for_examine" 00:13:31.242 } 00:13:31.242 ] 00:13:31.242 } 00:13:31.242 ] 00:13:31.242 } 00:13:31.501 [2024-04-24 20:05:13.531183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.501 [2024-04-24 20:05:13.628186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.760  Copying: 48/48 [kB] (average 46 MBps) 00:13:31.760 00:13:31.760 20:05:13 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:31.760 20:05:13 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:13:31.760 20:05:13 -- dd/common.sh@31 -- # xtrace_disable 00:13:31.760 20:05:13 -- common/autotest_common.sh@10 -- # set +x 00:13:32.020 [2024-04-24 20:05:14.022369] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:32.020 [2024-04-24 20:05:14.022453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62819 ] 00:13:32.020 { 00:13:32.020 "subsystems": [ 00:13:32.020 { 00:13:32.020 "subsystem": "bdev", 00:13:32.020 "config": [ 00:13:32.020 { 00:13:32.020 "params": { 00:13:32.020 "trtype": "pcie", 00:13:32.020 "traddr": "0000:00:10.0", 00:13:32.020 "name": "Nvme0" 00:13:32.020 }, 00:13:32.020 "method": "bdev_nvme_attach_controller" 00:13:32.020 }, 00:13:32.020 { 00:13:32.020 "method": "bdev_wait_for_examine" 00:13:32.020 } 00:13:32.020 ] 00:13:32.020 } 00:13:32.020 ] 00:13:32.020 } 00:13:32.020 [2024-04-24 20:05:14.160573] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.020 [2024-04-24 20:05:14.262338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.550  Copying: 48/48 [kB] (average 46 MBps) 00:13:32.550 00:13:32.550 20:05:14 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:32.550 20:05:14 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:13:32.550 20:05:14 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:32.550 20:05:14 -- dd/common.sh@11 -- # local nvme_ref= 00:13:32.550 20:05:14 -- dd/common.sh@12 -- # local size=49152 00:13:32.550 20:05:14 -- dd/common.sh@14 -- # local bs=1048576 00:13:32.550 20:05:14 -- dd/common.sh@15 -- # local count=1 00:13:32.550 20:05:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:32.550 20:05:14 -- dd/common.sh@18 -- # gen_conf 00:13:32.550 20:05:14 -- dd/common.sh@31 -- # xtrace_disable 00:13:32.550 20:05:14 -- common/autotest_common.sh@10 -- # set +x 00:13:32.550 [2024-04-24 20:05:14.676915] Starting SPDK v24.05-pre git sha1 
4907d1565 / DPDK 23.11.0 initialization... 00:13:32.550 [2024-04-24 20:05:14.677061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62839 ] 00:13:32.550 { 00:13:32.550 "subsystems": [ 00:13:32.550 { 00:13:32.550 "subsystem": "bdev", 00:13:32.550 "config": [ 00:13:32.550 { 00:13:32.550 "params": { 00:13:32.550 "trtype": "pcie", 00:13:32.550 "traddr": "0000:00:10.0", 00:13:32.550 "name": "Nvme0" 00:13:32.550 }, 00:13:32.550 "method": "bdev_nvme_attach_controller" 00:13:32.550 }, 00:13:32.550 { 00:13:32.550 "method": "bdev_wait_for_examine" 00:13:32.550 } 00:13:32.550 ] 00:13:32.550 } 00:13:32.550 ] 00:13:32.550 } 00:13:32.819 [2024-04-24 20:05:14.816057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.819 [2024-04-24 20:05:14.913593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.090  Copying: 1024/1024 [kB] (average 500 MBps) 00:13:33.090 00:13:33.090 20:05:15 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:33.090 20:05:15 -- dd/basic_rw.sh@23 -- # count=3 00:13:33.090 20:05:15 -- dd/basic_rw.sh@24 -- # count=3 00:13:33.090 20:05:15 -- dd/basic_rw.sh@25 -- # size=49152 00:13:33.090 20:05:15 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:13:33.090 20:05:15 -- dd/common.sh@98 -- # xtrace_disable 00:13:33.090 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:13:33.362 20:05:15 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:13:33.362 20:05:15 -- dd/basic_rw.sh@30 -- # gen_conf 00:13:33.362 20:05:15 -- dd/common.sh@31 -- # xtrace_disable 00:13:33.362 20:05:15 -- common/autotest_common.sh@10 -- # set +x 00:13:33.636 [2024-04-24 20:05:15.653081] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:33.636 [2024-04-24 20:05:15.653237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62858 ] 00:13:33.636 { 00:13:33.636 "subsystems": [ 00:13:33.636 { 00:13:33.636 "subsystem": "bdev", 00:13:33.636 "config": [ 00:13:33.636 { 00:13:33.636 "params": { 00:13:33.636 "trtype": "pcie", 00:13:33.636 "traddr": "0000:00:10.0", 00:13:33.636 "name": "Nvme0" 00:13:33.636 }, 00:13:33.636 "method": "bdev_nvme_attach_controller" 00:13:33.636 }, 00:13:33.636 { 00:13:33.636 "method": "bdev_wait_for_examine" 00:13:33.636 } 00:13:33.636 ] 00:13:33.636 } 00:13:33.636 ] 00:13:33.636 } 00:13:33.636 [2024-04-24 20:05:15.789983] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.636 [2024-04-24 20:05:15.886692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.221  Copying: 48/48 [kB] (average 46 MBps) 00:13:34.221 00:13:34.221 20:05:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:13:34.221 20:05:16 -- dd/common.sh@31 -- # xtrace_disable 00:13:34.221 20:05:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.221 20:05:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:13:34.221 [2024-04-24 20:05:16.285067] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:34.221 [2024-04-24 20:05:16.285143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62871 ] 00:13:34.221 { 00:13:34.221 "subsystems": [ 00:13:34.221 { 00:13:34.221 "subsystem": "bdev", 00:13:34.221 "config": [ 00:13:34.221 { 00:13:34.221 "params": { 00:13:34.221 "trtype": "pcie", 00:13:34.221 "traddr": "0000:00:10.0", 00:13:34.221 "name": "Nvme0" 00:13:34.221 }, 00:13:34.221 "method": "bdev_nvme_attach_controller" 00:13:34.221 }, 00:13:34.221 { 00:13:34.221 "method": "bdev_wait_for_examine" 00:13:34.221 } 00:13:34.221 ] 00:13:34.221 } 00:13:34.221 ] 00:13:34.221 } 00:13:34.221 [2024-04-24 20:05:16.424287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.480 [2024-04-24 20:05:16.521096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.739  Copying: 48/48 [kB] (average 46 MBps) 00:13:34.739 00:13:34.739 20:05:16 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:34.739 20:05:16 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:13:34.739 20:05:16 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:34.739 20:05:16 -- dd/common.sh@11 -- # local nvme_ref= 00:13:34.739 20:05:16 -- dd/common.sh@12 -- # local size=49152 00:13:34.739 20:05:16 -- dd/common.sh@14 -- # local bs=1048576 00:13:34.739 20:05:16 -- dd/common.sh@15 -- # local count=1 00:13:34.739 20:05:16 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:34.739 20:05:16 -- dd/common.sh@18 -- # gen_conf 00:13:34.739 20:05:16 -- dd/common.sh@31 -- # xtrace_disable 00:13:34.739 20:05:16 -- common/autotest_common.sh@10 -- # set +x 00:13:34.739 [2024-04-24 20:05:16.928175] Starting SPDK v24.05-pre git sha1 
4907d1565 / DPDK 23.11.0 initialization... 00:13:34.739 [2024-04-24 20:05:16.928310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62887 ] 00:13:34.739 { 00:13:34.739 "subsystems": [ 00:13:34.739 { 00:13:34.739 "subsystem": "bdev", 00:13:34.739 "config": [ 00:13:34.739 { 00:13:34.739 "params": { 00:13:34.739 "trtype": "pcie", 00:13:34.739 "traddr": "0000:00:10.0", 00:13:34.739 "name": "Nvme0" 00:13:34.739 }, 00:13:34.739 "method": "bdev_nvme_attach_controller" 00:13:34.739 }, 00:13:34.739 { 00:13:34.739 "method": "bdev_wait_for_examine" 00:13:34.739 } 00:13:34.739 ] 00:13:34.739 } 00:13:34.739 ] 00:13:34.739 } 00:13:34.999 [2024-04-24 20:05:17.065421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.999 [2024-04-24 20:05:17.160189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.258  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:35.258 00:13:35.258 00:13:35.258 real 0m14.036s 00:13:35.258 user 0m10.555s 00:13:35.258 sys 0m4.621s 00:13:35.258 20:05:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:35.258 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.258 ************************************ 00:13:35.258 END TEST dd_rw 00:13:35.258 ************************************ 00:13:35.522 20:05:17 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:13:35.522 20:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:35.522 20:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.522 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.522 ************************************ 00:13:35.522 START TEST dd_rw_offset 00:13:35.522 ************************************ 00:13:35.522 20:05:17 -- common/autotest_common.sh@1111 -- # basic_offset 00:13:35.522 20:05:17 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:13:35.522 20:05:17 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:13:35.522 20:05:17 -- dd/common.sh@98 -- # xtrace_disable 00:13:35.522 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.522 20:05:17 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:13:35.522 20:05:17 -- dd/basic_rw.sh@56 -- # 
data=tvfwmmgmju8tcbax3eob6jhb5zagapukf2p9y08u1hq67fszsfau5a8fpmczjnk7544ri6fiiz9rocbunqwvqtstqlsnc8bezen5qodom1rhfqrwq1go7drnxiey52bjemk48ls8xw5ixutvx2nzpg4r3ggzfaynp26kmcs91ateq8h1xvwkwtbq8g3i8cyip4221dl75gpdfcxb17xj6kg28rokznqn1r4beu6uiz7q8z5mnh8xwb0ell3n97vlcqidy28496vf7bjgh3tdr5ivnc31zw2bs708r6g34nqrprl35osn1q2mrzd1oag3r8manwzehrsom6wnl276jiu4ok1dhui2tqr3fddsfgv2mxfnmn9iswe4ugltz57wby6m8ba9szpuyzgm59nl8v8g2tmgihn5p0irqeiule8ahz7b5sp90oenuf4f9f3w8cj5jwmsx5qhit4wrjflst6nyax4h0ejkppl8auchs67gtj0b1ykp7hbo6qcqyrtze8sdum4t9hgiqpzt31zn5j8fxn01k51tzwfz4qu5zaf06ztfjz4kyq8083l8np53jfs7cwlvc5sb2m7l8fvjtahy3dyhake9hhqp2z1wpiucf3zo9l10f9cal5xw4sazat3ijqitoz3zb2ctw9jrranngoaeh8nt2p97x47u59u0bdg47n0vgyvlc94xy4qn9xb7xizpbcbnwafj5g163o649p3dmxq1la2gztlyqfcdys8uxpl3nxdvpih3aj8t1z0urgjxswsdkxrkipngy99qxddo6kvyisxhm2l7eux81f52lsn34wh4l7v3vanpw71d0cecgelemcqukxpk3p18a8owuzd2goreyjrnuc45220j92geud4yaw4mw95fytiixu7p93y38u3l23yjpe6e5jn0rkfurxwqu5bxgkgip7s0rbh8c9poz6ftcep1sislmbek2wq89382edogrowpsruiq2zecjpky8a3q4nyyr1sdvv0qkfsvgog4evjf4lguyya49nokmjngo33yadq8to81aeexnsgot30zd5tqran7l14y47vbhl479qnqwpzjan0y0x3jv681f6oqgwsbw9ev61u9uoc5tzp4b161rfw75khikeyu1hu5y8y23if524u8j2gj2tkiko7bkppofao1s7i2yo0h3xkod6cb96oscdifkx6o0lfay0ugs6kt8sm29t5e1d6upp8s64g0kasn6dd0oalt8knzz1hggfog7rkd1toeu1azbhoa6kkp8hsjh1aq4w5grf2yeckkggzct2sleuo6ddov1ggm2uuoqsbigxggzpmy0c1bdjov0844kb7pvtkj2rr8ogieoqms0gjqnhsipz90rs2k4yka2biodrjgjpb0iuyno7ctxgu032xv9pwp02lhv15ryzt9kk79b2r2nnm6or6qnqvt70hiluzy01zequypcuw8u9frjrrl1rz0wz9vbnxj5ssbw82vct7tyexdpbaxjzofebp8l9o7mxu3ztyes761ev4r3vbr5cnas7j0ao2ssbrv37r390yf8h66wrhfx2a78fxp23kbokxn81q8p5k5rlvbb7qeknjxgr6dltxxnsqzx6x8pjgr2gjx4qsq1xuem3gisci50bnxfaztsv448n2h7oyknkk4r9gmxzve0ysrn4ehqi983p5cog0os3muhlqa78a7151mpxhcqjqaxsreo0ee8v4lm1yb5f0gejp8aw38u9fueaxcf7tunyoelt4s2yl9zhi7484cdm7gb9qerhobabu2ts33deymmjd0tu4h9uiyxr1o53d8u0lw0h2squ8iqplp72umdor8sx6a60grm3xrv261ma6knidb3hta3szvzbboeznbtwpm9plqzvfne31me943dfzbgmfe4naby1fks9uk96zdhghrszq9rahivgbw7lerqj7kwtgk0h8kegnt064ezbr8ov08gvkmu1vkd6ogx78myabeud137i1n7v62v47g9594jllrqg4czh60yjj3jvg4wav8iozolk16wvrcmr1g5z5m2n9rj7lgmp601p8uwk7u3zp2pekaos7aon97jbxzvwx9qoae911nikpglfpfr2wqmrh6wtwn6p39geppq7oi6khnv9u64xukl37y4pl68m6uzy5on21g0dv3l2qo6rf1pcny4nljl37z148557n6zbbce5zc2y4dsgzdrv49xw3wnyes1sdupvxjszvzn3yqd1v3bdniyrx8tobnagemrriai2bfvoljxwpz2sylbg9ucutqub82ht44phht45da7n8ow9ik2xo4hk237nhw0sxl0v0vmfb3x776vinodnx0f7kygwd0nfryoal8loniefnv0skgioihqybe5r4hwujhzelq6yksdkn160s1pq9mkmj167b9f4pivowzevlxe7b2rxxsuski898poo29jtbhskhhqnmvebgaix9xj99w9vlxzpjs70ujyzgc559nc4j3oz54v5c43m9uk9j65xop33bm2rzj9wdne4wcx15k44djkif8y7g5dk1ad4kuxcqfmrib2e0pzsbdj28ilvrl6gvfwgophz6zyvz114pf59l0e3h68d8eo5wtxp7ggxajznbjkyrl1lr4v5sowl17im8yskrmz1d2e1wppywbx1r75ebadimgp0gw816wh6ggjl1247t85cfpdxb29cre8zh1mysa0fxwf6f0x4eby13dmwmqn7l3jd95vi93xdzzz7wf5n3azdftfacgwowl351uo9ak3wtmc1lklywdyxjuale43rjazuh4oarvnxqr0gcbmjyg2whih5t3k74kes5xmu9jzw71hru2a38usk23qspx44iwzx5rmzme6z906a8bbo0nqen682vrl1vtbbk7koynejfrve25xti32qer11oo5xplz6yx33zrcaa7319qi52l639vp8c6x3nqh9ujtu5v0ye94awcpo00bprgtb4m8sz8ykegqbfzan2yy6gxlhqds3yo8ngho0mmw42222g3l3g1rb6rr6za4z9mu957zzatcvief0ta03o9ewldc4vmoo0aii18ju246y9nb2quxrr805tyil5r9jdnwiidx05u3ian5jlkf36ktnoc8mg4jntm2rrokt2gwt4ni25h9r0qnul8jjgwr32yvxm3pnvevb99z9wcp4k09krpxrjif6i645kzo1qbwmf05r2bcihd665r8e95ta9y0g9jp7dk0pkojlta93ipq2srltgvbc3fqtysk0vtpivo877r1t71e815ovegv3t9n9dqlsn4a0cbnubyxbwnrg55a4omf6sxdpi2nlrdxti1i3em9ncpp4xthdqhyxz9eeflpnayjqbtfqhnaoh1ex7wu0e1tr1kis25v8tp61ld7xo8vx599vkeitdkdsc2oixl8disu4a7b2i4dp9fzen89rmyqfr6hkmbnquefw00nbl35z9i2bt714k6aix1dqljpw36r2tfa9un
tzbojmxm98gq8keupbbi841y7fndu4b9pmoeyr4ix9iggk0a7qtfom1fh4z18hurgpy4fmv5k04h3sh9alp8v9sy8yz2ojbucyg0civay4gro4yzm05wcb393hf8dmbinbhfvuzxp3woea2yjp5us553ixy63wq3zub3a3stpd6y6hlk2yj3ouzk55pplzbrf1m3ljvyd00v7e0trjih4c00wwnk0f5cnn8bures7r2qiso14j1ns0p96076t241s9goz6qd3yupi9vz60qnxczxuwlr2q8iz8w867j86hv6kbkqkkf5mmfk5bw2fd7neh0utxd74kmwt7oz5jdpluzaqkm51rnce01abnevdn7xlfzsfu1sei917pkcx0hiycjkxc33jfmq8r5brbpttd3517g2yscf23arzy3g32uamr5aecpx5utn199hzgj18pg4nm2dwx20tx0gdokbxe9p3e4gjz4n2if43wrpafg4m5tqesm09xiewypl6x2nvfpb27vhcsyxoza33k6cbmctkpz99bm4er 00:13:35.522 20:05:17 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:13:35.522 20:05:17 -- dd/basic_rw.sh@59 -- # gen_conf 00:13:35.522 20:05:17 -- dd/common.sh@31 -- # xtrace_disable 00:13:35.522 20:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:35.522 [2024-04-24 20:05:17.744006] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:35.522 [2024-04-24 20:05:17.744133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62927 ] 00:13:35.522 { 00:13:35.522 "subsystems": [ 00:13:35.522 { 00:13:35.522 "subsystem": "bdev", 00:13:35.522 "config": [ 00:13:35.522 { 00:13:35.522 "params": { 00:13:35.522 "trtype": "pcie", 00:13:35.522 "traddr": "0000:00:10.0", 00:13:35.522 "name": "Nvme0" 00:13:35.522 }, 00:13:35.522 "method": "bdev_nvme_attach_controller" 00:13:35.522 }, 00:13:35.522 { 00:13:35.522 "method": "bdev_wait_for_examine" 00:13:35.522 } 00:13:35.522 ] 00:13:35.522 } 00:13:35.522 ] 00:13:35.522 } 00:13:35.788 [2024-04-24 20:05:17.879748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.788 [2024-04-24 20:05:17.974953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.306  Copying: 4096/4096 [B] (average 4000 kBps) 00:13:36.306 00:13:36.306 20:05:18 -- dd/basic_rw.sh@65 -- # gen_conf 00:13:36.306 20:05:18 -- dd/common.sh@31 -- # xtrace_disable 00:13:36.306 20:05:18 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:13:36.306 20:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.306 [2024-04-24 20:05:18.365827] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
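dd_rw_offset writes one 4096-byte block of generated data at block offset 1 (--seek=1), reads exactly one block back from the same offset (--skip=1 --count=1), and then string-compares the read-back data against the original, which is what the long pattern match just below this point is doing. A condensed sketch of that round trip; the data generation is a stand-in for gen_bytes 4096, with the same SPDK_DD/bdev.json assumptions as before.

# dd_rw_offset-style round trip: write at an offset, read it back, compare.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
data=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)     # 4096 chars, like gen_bytes 4096
printf '%s' "$data" > dd.dump0

"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json bdev.json
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json bdev.json

read -rn4096 data_check < dd.dump1                        # mirrors the read -rn4096 in the trace
[[ $data_check == "$data" ]] && echo "offset write/read verified"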
00:13:36.306 [2024-04-24 20:05:18.365915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62940 ] 00:13:36.306 { 00:13:36.306 "subsystems": [ 00:13:36.306 { 00:13:36.306 "subsystem": "bdev", 00:13:36.306 "config": [ 00:13:36.306 { 00:13:36.306 "params": { 00:13:36.306 "trtype": "pcie", 00:13:36.306 "traddr": "0000:00:10.0", 00:13:36.306 "name": "Nvme0" 00:13:36.306 }, 00:13:36.306 "method": "bdev_nvme_attach_controller" 00:13:36.306 }, 00:13:36.306 { 00:13:36.306 "method": "bdev_wait_for_examine" 00:13:36.306 } 00:13:36.306 ] 00:13:36.306 } 00:13:36.306 ] 00:13:36.306 } 00:13:36.306 [2024-04-24 20:05:18.506000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.566 [2024-04-24 20:05:18.600513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.826  Copying: 4096/4096 [B] (average 4000 kBps) 00:13:36.826 00:13:36.826 20:05:18 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:13:36.826 ************************************ 00:13:36.826 END TEST dd_rw_offset 00:13:36.826 ************************************ 00:13:36.827 20:05:18 -- dd/basic_rw.sh@72 -- # [[ tvfwmmgmju8tcbax3eob6jhb5zagapukf2p9y08u1hq67fszsfau5a8fpmczjnk7544ri6fiiz9rocbunqwvqtstqlsnc8bezen5qodom1rhfqrwq1go7drnxiey52bjemk48ls8xw5ixutvx2nzpg4r3ggzfaynp26kmcs91ateq8h1xvwkwtbq8g3i8cyip4221dl75gpdfcxb17xj6kg28rokznqn1r4beu6uiz7q8z5mnh8xwb0ell3n97vlcqidy28496vf7bjgh3tdr5ivnc31zw2bs708r6g34nqrprl35osn1q2mrzd1oag3r8manwzehrsom6wnl276jiu4ok1dhui2tqr3fddsfgv2mxfnmn9iswe4ugltz57wby6m8ba9szpuyzgm59nl8v8g2tmgihn5p0irqeiule8ahz7b5sp90oenuf4f9f3w8cj5jwmsx5qhit4wrjflst6nyax4h0ejkppl8auchs67gtj0b1ykp7hbo6qcqyrtze8sdum4t9hgiqpzt31zn5j8fxn01k51tzwfz4qu5zaf06ztfjz4kyq8083l8np53jfs7cwlvc5sb2m7l8fvjtahy3dyhake9hhqp2z1wpiucf3zo9l10f9cal5xw4sazat3ijqitoz3zb2ctw9jrranngoaeh8nt2p97x47u59u0bdg47n0vgyvlc94xy4qn9xb7xizpbcbnwafj5g163o649p3dmxq1la2gztlyqfcdys8uxpl3nxdvpih3aj8t1z0urgjxswsdkxrkipngy99qxddo6kvyisxhm2l7eux81f52lsn34wh4l7v3vanpw71d0cecgelemcqukxpk3p18a8owuzd2goreyjrnuc45220j92geud4yaw4mw95fytiixu7p93y38u3l23yjpe6e5jn0rkfurxwqu5bxgkgip7s0rbh8c9poz6ftcep1sislmbek2wq89382edogrowpsruiq2zecjpky8a3q4nyyr1sdvv0qkfsvgog4evjf4lguyya49nokmjngo33yadq8to81aeexnsgot30zd5tqran7l14y47vbhl479qnqwpzjan0y0x3jv681f6oqgwsbw9ev61u9uoc5tzp4b161rfw75khikeyu1hu5y8y23if524u8j2gj2tkiko7bkppofao1s7i2yo0h3xkod6cb96oscdifkx6o0lfay0ugs6kt8sm29t5e1d6upp8s64g0kasn6dd0oalt8knzz1hggfog7rkd1toeu1azbhoa6kkp8hsjh1aq4w5grf2yeckkggzct2sleuo6ddov1ggm2uuoqsbigxggzpmy0c1bdjov0844kb7pvtkj2rr8ogieoqms0gjqnhsipz90rs2k4yka2biodrjgjpb0iuyno7ctxgu032xv9pwp02lhv15ryzt9kk79b2r2nnm6or6qnqvt70hiluzy01zequypcuw8u9frjrrl1rz0wz9vbnxj5ssbw82vct7tyexdpbaxjzofebp8l9o7mxu3ztyes761ev4r3vbr5cnas7j0ao2ssbrv37r390yf8h66wrhfx2a78fxp23kbokxn81q8p5k5rlvbb7qeknjxgr6dltxxnsqzx6x8pjgr2gjx4qsq1xuem3gisci50bnxfaztsv448n2h7oyknkk4r9gmxzve0ysrn4ehqi983p5cog0os3muhlqa78a7151mpxhcqjqaxsreo0ee8v4lm1yb5f0gejp8aw38u9fueaxcf7tunyoelt4s2yl9zhi7484cdm7gb9qerhobabu2ts33deymmjd0tu4h9uiyxr1o53d8u0lw0h2squ8iqplp72umdor8sx6a60grm3xrv261ma6knidb3hta3szvzbboeznbtwpm9plqzvfne31me943dfzbgmfe4naby1fks9uk96zdhghrszq9rahivgbw7lerqj7kwtgk0h8kegnt064ezbr8ov08gvkmu1vkd6ogx78myabeud137i1n7v62v47g9594jllrqg4czh60yjj3jvg4wav8iozolk16wvrcmr1g5z5m2n9rj7lgmp601p8uwk7u3zp2pekaos7aon97jbxzvwx9qoae911nikpglfpfr2wqmrh6wtwn6p39geppq7oi6khnv9u64xukl37y4pl68m6uzy5on21g0dv3l2qo6rf1pcny4nljl37z148557n6zbbce5zc2y4dsgzdrv49xw3wn
yes1sdupvxjszvzn3yqd1v3bdniyrx8tobnagemrriai2bfvoljxwpz2sylbg9ucutqub82ht44phht45da7n8ow9ik2xo4hk237nhw0sxl0v0vmfb3x776vinodnx0f7kygwd0nfryoal8loniefnv0skgioihqybe5r4hwujhzelq6yksdkn160s1pq9mkmj167b9f4pivowzevlxe7b2rxxsuski898poo29jtbhskhhqnmvebgaix9xj99w9vlxzpjs70ujyzgc559nc4j3oz54v5c43m9uk9j65xop33bm2rzj9wdne4wcx15k44djkif8y7g5dk1ad4kuxcqfmrib2e0pzsbdj28ilvrl6gvfwgophz6zyvz114pf59l0e3h68d8eo5wtxp7ggxajznbjkyrl1lr4v5sowl17im8yskrmz1d2e1wppywbx1r75ebadimgp0gw816wh6ggjl1247t85cfpdxb29cre8zh1mysa0fxwf6f0x4eby13dmwmqn7l3jd95vi93xdzzz7wf5n3azdftfacgwowl351uo9ak3wtmc1lklywdyxjuale43rjazuh4oarvnxqr0gcbmjyg2whih5t3k74kes5xmu9jzw71hru2a38usk23qspx44iwzx5rmzme6z906a8bbo0nqen682vrl1vtbbk7koynejfrve25xti32qer11oo5xplz6yx33zrcaa7319qi52l639vp8c6x3nqh9ujtu5v0ye94awcpo00bprgtb4m8sz8ykegqbfzan2yy6gxlhqds3yo8ngho0mmw42222g3l3g1rb6rr6za4z9mu957zzatcvief0ta03o9ewldc4vmoo0aii18ju246y9nb2quxrr805tyil5r9jdnwiidx05u3ian5jlkf36ktnoc8mg4jntm2rrokt2gwt4ni25h9r0qnul8jjgwr32yvxm3pnvevb99z9wcp4k09krpxrjif6i645kzo1qbwmf05r2bcihd665r8e95ta9y0g9jp7dk0pkojlta93ipq2srltgvbc3fqtysk0vtpivo877r1t71e815ovegv3t9n9dqlsn4a0cbnubyxbwnrg55a4omf6sxdpi2nlrdxti1i3em9ncpp4xthdqhyxz9eeflpnayjqbtfqhnaoh1ex7wu0e1tr1kis25v8tp61ld7xo8vx599vkeitdkdsc2oixl8disu4a7b2i4dp9fzen89rmyqfr6hkmbnquefw00nbl35z9i2bt714k6aix1dqljpw36r2tfa9untzbojmxm98gq8keupbbi841y7fndu4b9pmoeyr4ix9iggk0a7qtfom1fh4z18hurgpy4fmv5k04h3sh9alp8v9sy8yz2ojbucyg0civay4gro4yzm05wcb393hf8dmbinbhfvuzxp3woea2yjp5us553ixy63wq3zub3a3stpd6y6hlk2yj3ouzk55pplzbrf1m3ljvyd00v7e0trjih4c00wwnk0f5cnn8bures7r2qiso14j1ns0p96076t241s9goz6qd3yupi9vz60qnxczxuwlr2q8iz8w867j86hv6kbkqkkf5mmfk5bw2fd7neh0utxd74kmwt7oz5jdpluzaqkm51rnce01abnevdn7xlfzsfu1sei917pkcx0hiycjkxc33jfmq8r5brbpttd3517g2yscf23arzy3g32uamr5aecpx5utn199hzgj18pg4nm2dwx20tx0gdokbxe9p3e4gjz4n2if43wrpafg4m5tqesm09xiewypl6x2nvfpb27vhcsyxoza33k6cbmctkpz99bm4er == 
\t\v\f\w\m\m\g\m\j\u\8\t\c\b\a\x\3\e\o\b\6\j\h\b\5\z\a\g\a\p\u\k\f\2\p\9\y\0\8\u\1\h\q\6\7\f\s\z\s\f\a\u\5\a\8\f\p\m\c\z\j\n\k\7\5\4\4\r\i\6\f\i\i\z\9\r\o\c\b\u\n\q\w\v\q\t\s\t\q\l\s\n\c\8\b\e\z\e\n\5\q\o\d\o\m\1\r\h\f\q\r\w\q\1\g\o\7\d\r\n\x\i\e\y\5\2\b\j\e\m\k\4\8\l\s\8\x\w\5\i\x\u\t\v\x\2\n\z\p\g\4\r\3\g\g\z\f\a\y\n\p\2\6\k\m\c\s\9\1\a\t\e\q\8\h\1\x\v\w\k\w\t\b\q\8\g\3\i\8\c\y\i\p\4\2\2\1\d\l\7\5\g\p\d\f\c\x\b\1\7\x\j\6\k\g\2\8\r\o\k\z\n\q\n\1\r\4\b\e\u\6\u\i\z\7\q\8\z\5\m\n\h\8\x\w\b\0\e\l\l\3\n\9\7\v\l\c\q\i\d\y\2\8\4\9\6\v\f\7\b\j\g\h\3\t\d\r\5\i\v\n\c\3\1\z\w\2\b\s\7\0\8\r\6\g\3\4\n\q\r\p\r\l\3\5\o\s\n\1\q\2\m\r\z\d\1\o\a\g\3\r\8\m\a\n\w\z\e\h\r\s\o\m\6\w\n\l\2\7\6\j\i\u\4\o\k\1\d\h\u\i\2\t\q\r\3\f\d\d\s\f\g\v\2\m\x\f\n\m\n\9\i\s\w\e\4\u\g\l\t\z\5\7\w\b\y\6\m\8\b\a\9\s\z\p\u\y\z\g\m\5\9\n\l\8\v\8\g\2\t\m\g\i\h\n\5\p\0\i\r\q\e\i\u\l\e\8\a\h\z\7\b\5\s\p\9\0\o\e\n\u\f\4\f\9\f\3\w\8\c\j\5\j\w\m\s\x\5\q\h\i\t\4\w\r\j\f\l\s\t\6\n\y\a\x\4\h\0\e\j\k\p\p\l\8\a\u\c\h\s\6\7\g\t\j\0\b\1\y\k\p\7\h\b\o\6\q\c\q\y\r\t\z\e\8\s\d\u\m\4\t\9\h\g\i\q\p\z\t\3\1\z\n\5\j\8\f\x\n\0\1\k\5\1\t\z\w\f\z\4\q\u\5\z\a\f\0\6\z\t\f\j\z\4\k\y\q\8\0\8\3\l\8\n\p\5\3\j\f\s\7\c\w\l\v\c\5\s\b\2\m\7\l\8\f\v\j\t\a\h\y\3\d\y\h\a\k\e\9\h\h\q\p\2\z\1\w\p\i\u\c\f\3\z\o\9\l\1\0\f\9\c\a\l\5\x\w\4\s\a\z\a\t\3\i\j\q\i\t\o\z\3\z\b\2\c\t\w\9\j\r\r\a\n\n\g\o\a\e\h\8\n\t\2\p\9\7\x\4\7\u\5\9\u\0\b\d\g\4\7\n\0\v\g\y\v\l\c\9\4\x\y\4\q\n\9\x\b\7\x\i\z\p\b\c\b\n\w\a\f\j\5\g\1\6\3\o\6\4\9\p\3\d\m\x\q\1\l\a\2\g\z\t\l\y\q\f\c\d\y\s\8\u\x\p\l\3\n\x\d\v\p\i\h\3\a\j\8\t\1\z\0\u\r\g\j\x\s\w\s\d\k\x\r\k\i\p\n\g\y\9\9\q\x\d\d\o\6\k\v\y\i\s\x\h\m\2\l\7\e\u\x\8\1\f\5\2\l\s\n\3\4\w\h\4\l\7\v\3\v\a\n\p\w\7\1\d\0\c\e\c\g\e\l\e\m\c\q\u\k\x\p\k\3\p\1\8\a\8\o\w\u\z\d\2\g\o\r\e\y\j\r\n\u\c\4\5\2\2\0\j\9\2\g\e\u\d\4\y\a\w\4\m\w\9\5\f\y\t\i\i\x\u\7\p\9\3\y\3\8\u\3\l\2\3\y\j\p\e\6\e\5\j\n\0\r\k\f\u\r\x\w\q\u\5\b\x\g\k\g\i\p\7\s\0\r\b\h\8\c\9\p\o\z\6\f\t\c\e\p\1\s\i\s\l\m\b\e\k\2\w\q\8\9\3\8\2\e\d\o\g\r\o\w\p\s\r\u\i\q\2\z\e\c\j\p\k\y\8\a\3\q\4\n\y\y\r\1\s\d\v\v\0\q\k\f\s\v\g\o\g\4\e\v\j\f\4\l\g\u\y\y\a\4\9\n\o\k\m\j\n\g\o\3\3\y\a\d\q\8\t\o\8\1\a\e\e\x\n\s\g\o\t\3\0\z\d\5\t\q\r\a\n\7\l\1\4\y\4\7\v\b\h\l\4\7\9\q\n\q\w\p\z\j\a\n\0\y\0\x\3\j\v\6\8\1\f\6\o\q\g\w\s\b\w\9\e\v\6\1\u\9\u\o\c\5\t\z\p\4\b\1\6\1\r\f\w\7\5\k\h\i\k\e\y\u\1\h\u\5\y\8\y\2\3\i\f\5\2\4\u\8\j\2\g\j\2\t\k\i\k\o\7\b\k\p\p\o\f\a\o\1\s\7\i\2\y\o\0\h\3\x\k\o\d\6\c\b\9\6\o\s\c\d\i\f\k\x\6\o\0\l\f\a\y\0\u\g\s\6\k\t\8\s\m\2\9\t\5\e\1\d\6\u\p\p\8\s\6\4\g\0\k\a\s\n\6\d\d\0\o\a\l\t\8\k\n\z\z\1\h\g\g\f\o\g\7\r\k\d\1\t\o\e\u\1\a\z\b\h\o\a\6\k\k\p\8\h\s\j\h\1\a\q\4\w\5\g\r\f\2\y\e\c\k\k\g\g\z\c\t\2\s\l\e\u\o\6\d\d\o\v\1\g\g\m\2\u\u\o\q\s\b\i\g\x\g\g\z\p\m\y\0\c\1\b\d\j\o\v\0\8\4\4\k\b\7\p\v\t\k\j\2\r\r\8\o\g\i\e\o\q\m\s\0\g\j\q\n\h\s\i\p\z\9\0\r\s\2\k\4\y\k\a\2\b\i\o\d\r\j\g\j\p\b\0\i\u\y\n\o\7\c\t\x\g\u\0\3\2\x\v\9\p\w\p\0\2\l\h\v\1\5\r\y\z\t\9\k\k\7\9\b\2\r\2\n\n\m\6\o\r\6\q\n\q\v\t\7\0\h\i\l\u\z\y\0\1\z\e\q\u\y\p\c\u\w\8\u\9\f\r\j\r\r\l\1\r\z\0\w\z\9\v\b\n\x\j\5\s\s\b\w\8\2\v\c\t\7\t\y\e\x\d\p\b\a\x\j\z\o\f\e\b\p\8\l\9\o\7\m\x\u\3\z\t\y\e\s\7\6\1\e\v\4\r\3\v\b\r\5\c\n\a\s\7\j\0\a\o\2\s\s\b\r\v\3\7\r\3\9\0\y\f\8\h\6\6\w\r\h\f\x\2\a\7\8\f\x\p\2\3\k\b\o\k\x\n\8\1\q\8\p\5\k\5\r\l\v\b\b\7\q\e\k\n\j\x\g\r\6\d\l\t\x\x\n\s\q\z\x\6\x\8\p\j\g\r\2\g\j\x\4\q\s\q\1\x\u\e\m\3\g\i\s\c\i\5\0\b\n\x\f\a\z\t\s\v\4\4\8\n\2\h\7\o\y\k\n\k\k\4\r\9\g\m\x\z\v\e\0\y\s\r\n\4\e\h\q\i\9\8\3\p\5\c\o\g\0\o\s\3\m\u\h\l\q\a\7\8\a\7\1\5\1\m\p\x\h\c\q\j\q\a\x\s\r\e\o\0\e\e\8\v\4\l\m\1\y\b\5\f\0\g\e\j\p\8\a\w\3\8\u\9\f\u\e\a\x\c\f\7\t\u\n\y\o\
e\l\t\4\s\2\y\l\9\z\h\i\7\4\8\4\c\d\m\7\g\b\9\q\e\r\h\o\b\a\b\u\2\t\s\3\3\d\e\y\m\m\j\d\0\t\u\4\h\9\u\i\y\x\r\1\o\5\3\d\8\u\0\l\w\0\h\2\s\q\u\8\i\q\p\l\p\7\2\u\m\d\o\r\8\s\x\6\a\6\0\g\r\m\3\x\r\v\2\6\1\m\a\6\k\n\i\d\b\3\h\t\a\3\s\z\v\z\b\b\o\e\z\n\b\t\w\p\m\9\p\l\q\z\v\f\n\e\3\1\m\e\9\4\3\d\f\z\b\g\m\f\e\4\n\a\b\y\1\f\k\s\9\u\k\9\6\z\d\h\g\h\r\s\z\q\9\r\a\h\i\v\g\b\w\7\l\e\r\q\j\7\k\w\t\g\k\0\h\8\k\e\g\n\t\0\6\4\e\z\b\r\8\o\v\0\8\g\v\k\m\u\1\v\k\d\6\o\g\x\7\8\m\y\a\b\e\u\d\1\3\7\i\1\n\7\v\6\2\v\4\7\g\9\5\9\4\j\l\l\r\q\g\4\c\z\h\6\0\y\j\j\3\j\v\g\4\w\a\v\8\i\o\z\o\l\k\1\6\w\v\r\c\m\r\1\g\5\z\5\m\2\n\9\r\j\7\l\g\m\p\6\0\1\p\8\u\w\k\7\u\3\z\p\2\p\e\k\a\o\s\7\a\o\n\9\7\j\b\x\z\v\w\x\9\q\o\a\e\9\1\1\n\i\k\p\g\l\f\p\f\r\2\w\q\m\r\h\6\w\t\w\n\6\p\3\9\g\e\p\p\q\7\o\i\6\k\h\n\v\9\u\6\4\x\u\k\l\3\7\y\4\p\l\6\8\m\6\u\z\y\5\o\n\2\1\g\0\d\v\3\l\2\q\o\6\r\f\1\p\c\n\y\4\n\l\j\l\3\7\z\1\4\8\5\5\7\n\6\z\b\b\c\e\5\z\c\2\y\4\d\s\g\z\d\r\v\4\9\x\w\3\w\n\y\e\s\1\s\d\u\p\v\x\j\s\z\v\z\n\3\y\q\d\1\v\3\b\d\n\i\y\r\x\8\t\o\b\n\a\g\e\m\r\r\i\a\i\2\b\f\v\o\l\j\x\w\p\z\2\s\y\l\b\g\9\u\c\u\t\q\u\b\8\2\h\t\4\4\p\h\h\t\4\5\d\a\7\n\8\o\w\9\i\k\2\x\o\4\h\k\2\3\7\n\h\w\0\s\x\l\0\v\0\v\m\f\b\3\x\7\7\6\v\i\n\o\d\n\x\0\f\7\k\y\g\w\d\0\n\f\r\y\o\a\l\8\l\o\n\i\e\f\n\v\0\s\k\g\i\o\i\h\q\y\b\e\5\r\4\h\w\u\j\h\z\e\l\q\6\y\k\s\d\k\n\1\6\0\s\1\p\q\9\m\k\m\j\1\6\7\b\9\f\4\p\i\v\o\w\z\e\v\l\x\e\7\b\2\r\x\x\s\u\s\k\i\8\9\8\p\o\o\2\9\j\t\b\h\s\k\h\h\q\n\m\v\e\b\g\a\i\x\9\x\j\9\9\w\9\v\l\x\z\p\j\s\7\0\u\j\y\z\g\c\5\5\9\n\c\4\j\3\o\z\5\4\v\5\c\4\3\m\9\u\k\9\j\6\5\x\o\p\3\3\b\m\2\r\z\j\9\w\d\n\e\4\w\c\x\1\5\k\4\4\d\j\k\i\f\8\y\7\g\5\d\k\1\a\d\4\k\u\x\c\q\f\m\r\i\b\2\e\0\p\z\s\b\d\j\2\8\i\l\v\r\l\6\g\v\f\w\g\o\p\h\z\6\z\y\v\z\1\1\4\p\f\5\9\l\0\e\3\h\6\8\d\8\e\o\5\w\t\x\p\7\g\g\x\a\j\z\n\b\j\k\y\r\l\1\l\r\4\v\5\s\o\w\l\1\7\i\m\8\y\s\k\r\m\z\1\d\2\e\1\w\p\p\y\w\b\x\1\r\7\5\e\b\a\d\i\m\g\p\0\g\w\8\1\6\w\h\6\g\g\j\l\1\2\4\7\t\8\5\c\f\p\d\x\b\2\9\c\r\e\8\z\h\1\m\y\s\a\0\f\x\w\f\6\f\0\x\4\e\b\y\1\3\d\m\w\m\q\n\7\l\3\j\d\9\5\v\i\9\3\x\d\z\z\z\7\w\f\5\n\3\a\z\d\f\t\f\a\c\g\w\o\w\l\3\5\1\u\o\9\a\k\3\w\t\m\c\1\l\k\l\y\w\d\y\x\j\u\a\l\e\4\3\r\j\a\z\u\h\4\o\a\r\v\n\x\q\r\0\g\c\b\m\j\y\g\2\w\h\i\h\5\t\3\k\7\4\k\e\s\5\x\m\u\9\j\z\w\7\1\h\r\u\2\a\3\8\u\s\k\2\3\q\s\p\x\4\4\i\w\z\x\5\r\m\z\m\e\6\z\9\0\6\a\8\b\b\o\0\n\q\e\n\6\8\2\v\r\l\1\v\t\b\b\k\7\k\o\y\n\e\j\f\r\v\e\2\5\x\t\i\3\2\q\e\r\1\1\o\o\5\x\p\l\z\6\y\x\3\3\z\r\c\a\a\7\3\1\9\q\i\5\2\l\6\3\9\v\p\8\c\6\x\3\n\q\h\9\u\j\t\u\5\v\0\y\e\9\4\a\w\c\p\o\0\0\b\p\r\g\t\b\4\m\8\s\z\8\y\k\e\g\q\b\f\z\a\n\2\y\y\6\g\x\l\h\q\d\s\3\y\o\8\n\g\h\o\0\m\m\w\4\2\2\2\2\g\3\l\3\g\1\r\b\6\r\r\6\z\a\4\z\9\m\u\9\5\7\z\z\a\t\c\v\i\e\f\0\t\a\0\3\o\9\e\w\l\d\c\4\v\m\o\o\0\a\i\i\1\8\j\u\2\4\6\y\9\n\b\2\q\u\x\r\r\8\0\5\t\y\i\l\5\r\9\j\d\n\w\i\i\d\x\0\5\u\3\i\a\n\5\j\l\k\f\3\6\k\t\n\o\c\8\m\g\4\j\n\t\m\2\r\r\o\k\t\2\g\w\t\4\n\i\2\5\h\9\r\0\q\n\u\l\8\j\j\g\w\r\3\2\y\v\x\m\3\p\n\v\e\v\b\9\9\z\9\w\c\p\4\k\0\9\k\r\p\x\r\j\i\f\6\i\6\4\5\k\z\o\1\q\b\w\m\f\0\5\r\2\b\c\i\h\d\6\6\5\r\8\e\9\5\t\a\9\y\0\g\9\j\p\7\d\k\0\p\k\o\j\l\t\a\9\3\i\p\q\2\s\r\l\t\g\v\b\c\3\f\q\t\y\s\k\0\v\t\p\i\v\o\8\7\7\r\1\t\7\1\e\8\1\5\o\v\e\g\v\3\t\9\n\9\d\q\l\s\n\4\a\0\c\b\n\u\b\y\x\b\w\n\r\g\5\5\a\4\o\m\f\6\s\x\d\p\i\2\n\l\r\d\x\t\i\1\i\3\e\m\9\n\c\p\p\4\x\t\h\d\q\h\y\x\z\9\e\e\f\l\p\n\a\y\j\q\b\t\f\q\h\n\a\o\h\1\e\x\7\w\u\0\e\1\t\r\1\k\i\s\2\5\v\8\t\p\6\1\l\d\7\x\o\8\v\x\5\9\9\v\k\e\i\t\d\k\d\s\c\2\o\i\x\l\8\d\i\s\u\4\a\7\b\2\i\4\d\p\9\f\z\e\n\8\9\r\m\y\q\f\r\6\h\k\m\b\n\q\u\e\f\w\0\0\n\b\l\3\5\z\9\i\2\b\t\7\1\4\k\6\a\i\x\1\d\q\l\j\p\w\3\6\r\2\t\f\a\9\u\n\t\z\b\o\j
\m\x\m\9\8\g\q\8\k\e\u\p\b\b\i\8\4\1\y\7\f\n\d\u\4\b\9\p\m\o\e\y\r\4\i\x\9\i\g\g\k\0\a\7\q\t\f\o\m\1\f\h\4\z\1\8\h\u\r\g\p\y\4\f\m\v\5\k\0\4\h\3\s\h\9\a\l\p\8\v\9\s\y\8\y\z\2\o\j\b\u\c\y\g\0\c\i\v\a\y\4\g\r\o\4\y\z\m\0\5\w\c\b\3\9\3\h\f\8\d\m\b\i\n\b\h\f\v\u\z\x\p\3\w\o\e\a\2\y\j\p\5\u\s\5\5\3\i\x\y\6\3\w\q\3\z\u\b\3\a\3\s\t\p\d\6\y\6\h\l\k\2\y\j\3\o\u\z\k\5\5\p\p\l\z\b\r\f\1\m\3\l\j\v\y\d\0\0\v\7\e\0\t\r\j\i\h\4\c\0\0\w\w\n\k\0\f\5\c\n\n\8\b\u\r\e\s\7\r\2\q\i\s\o\1\4\j\1\n\s\0\p\9\6\0\7\6\t\2\4\1\s\9\g\o\z\6\q\d\3\y\u\p\i\9\v\z\6\0\q\n\x\c\z\x\u\w\l\r\2\q\8\i\z\8\w\8\6\7\j\8\6\h\v\6\k\b\k\q\k\k\f\5\m\m\f\k\5\b\w\2\f\d\7\n\e\h\0\u\t\x\d\7\4\k\m\w\t\7\o\z\5\j\d\p\l\u\z\a\q\k\m\5\1\r\n\c\e\0\1\a\b\n\e\v\d\n\7\x\l\f\z\s\f\u\1\s\e\i\9\1\7\p\k\c\x\0\h\i\y\c\j\k\x\c\3\3\j\f\m\q\8\r\5\b\r\b\p\t\t\d\3\5\1\7\g\2\y\s\c\f\2\3\a\r\z\y\3\g\3\2\u\a\m\r\5\a\e\c\p\x\5\u\t\n\1\9\9\h\z\g\j\1\8\p\g\4\n\m\2\d\w\x\2\0\t\x\0\g\d\o\k\b\x\e\9\p\3\e\4\g\j\z\4\n\2\i\f\4\3\w\r\p\a\f\g\4\m\5\t\q\e\s\m\0\9\x\i\e\w\y\p\l\6\x\2\n\v\f\p\b\2\7\v\h\c\s\y\x\o\z\a\3\3\k\6\c\b\m\c\t\k\p\z\9\9\b\m\4\e\r ]] 00:13:36.827 00:13:36.827 real 0m1.298s 00:13:36.827 user 0m0.961s 00:13:36.827 sys 0m0.479s 00:13:36.827 20:05:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:36.827 20:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:36.827 20:05:18 -- dd/basic_rw.sh@1 -- # cleanup 00:13:36.827 20:05:19 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:13:36.827 20:05:19 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:36.827 20:05:19 -- dd/common.sh@11 -- # local nvme_ref= 00:13:36.827 20:05:19 -- dd/common.sh@12 -- # local size=0xffff 00:13:36.827 20:05:19 -- dd/common.sh@14 -- # local bs=1048576 00:13:36.827 20:05:19 -- dd/common.sh@15 -- # local count=1 00:13:36.827 20:05:19 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:36.827 20:05:19 -- dd/common.sh@18 -- # gen_conf 00:13:36.827 20:05:19 -- dd/common.sh@31 -- # xtrace_disable 00:13:36.827 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:13:36.827 [2024-04-24 20:05:19.050736] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
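
The clear_nvme helper invoked just above zeroes out the start of the Nvme0n1 bdev; the bdev configuration it streams to spdk_dd over /dev/fd/62 is printed below, interleaved with timestamps. Reassembled, the step amounts to the following sketch (the real helper builds the JSON via gen_conf, so treat this as an illustration rather than the exact script):

# zero the first 1 MiB (bs=1048576, count=1) of Nvme0n1 using the bdev config on fd 62
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
    --json /dev/fd/62 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
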
00:13:36.827 [2024-04-24 20:05:19.050878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62970 ] 00:13:36.827 { 00:13:36.827 "subsystems": [ 00:13:36.827 { 00:13:36.827 "subsystem": "bdev", 00:13:36.827 "config": [ 00:13:36.827 { 00:13:36.827 "params": { 00:13:36.827 "trtype": "pcie", 00:13:36.827 "traddr": "0000:00:10.0", 00:13:36.827 "name": "Nvme0" 00:13:36.827 }, 00:13:36.827 "method": "bdev_nvme_attach_controller" 00:13:36.827 }, 00:13:36.827 { 00:13:36.827 "method": "bdev_wait_for_examine" 00:13:36.827 } 00:13:36.827 ] 00:13:36.827 } 00:13:36.827 ] 00:13:36.827 } 00:13:37.086 [2024-04-24 20:05:19.185596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.086 [2024-04-24 20:05:19.283028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.604  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:37.604 00:13:37.604 20:05:19 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:37.604 ************************************ 00:13:37.604 END TEST spdk_dd_basic_rw 00:13:37.604 ************************************ 00:13:37.604 00:13:37.604 real 0m17.353s 00:13:37.604 user 0m12.690s 00:13:37.604 sys 0m5.831s 00:13:37.604 20:05:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:37.604 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.604 20:05:19 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:13:37.604 20:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:37.604 20:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.604 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.604 ************************************ 00:13:37.604 START TEST spdk_dd_posix 00:13:37.604 ************************************ 00:13:37.604 20:05:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:13:37.863 * Looking for test storage... 
00:13:37.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:37.863 20:05:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.863 20:05:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.863 20:05:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.863 20:05:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.863 20:05:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.863 20:05:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.863 20:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.863 20:05:19 -- paths/export.sh@5 -- # export PATH 00:13:37.863 20:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.863 20:05:19 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:13:37.863 20:05:19 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:13:37.863 20:05:19 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:13:37.863 20:05:19 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:13:37.863 20:05:19 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:37.863 20:05:19 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:37.864 20:05:19 -- dd/posix.sh@130 -- # tests 00:13:37.864 20:05:19 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:13:37.864 * First test run, liburing in use 00:13:37.864 20:05:19 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:13:37.864 20:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:37.864 20:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.864 20:05:19 -- common/autotest_common.sh@10 -- # set +x 00:13:37.864 ************************************ 00:13:37.864 START TEST dd_flag_append 00:13:37.864 ************************************ 00:13:37.864 20:05:20 -- common/autotest_common.sh@1111 -- # append 00:13:37.864 20:05:20 -- dd/posix.sh@16 -- # local dump0 00:13:37.864 20:05:20 -- dd/posix.sh@17 -- # local dump1 00:13:37.864 20:05:20 -- dd/posix.sh@19 -- # gen_bytes 32 00:13:37.864 20:05:20 -- dd/common.sh@98 -- # xtrace_disable 00:13:37.864 20:05:20 -- common/autotest_common.sh@10 -- # set +x 00:13:37.864 20:05:20 -- dd/posix.sh@19 -- # dump0=ntwqhzit1vgi4n7y45f2wc6umjuojlqs 00:13:37.864 20:05:20 -- dd/posix.sh@20 -- # gen_bytes 32 00:13:37.864 20:05:20 -- dd/common.sh@98 -- # xtrace_disable 00:13:37.864 20:05:20 -- common/autotest_common.sh@10 -- # set +x 00:13:37.864 20:05:20 -- dd/posix.sh@20 -- # dump1=n9qimq7abb2g3394z3d23h892vsdvhg2 00:13:37.864 20:05:20 -- dd/posix.sh@22 -- # printf %s ntwqhzit1vgi4n7y45f2wc6umjuojlqs 00:13:37.864 20:05:20 -- dd/posix.sh@23 -- # printf %s n9qimq7abb2g3394z3d23h892vsdvhg2 00:13:37.864 20:05:20 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:13:37.864 [2024-04-24 20:05:20.058268] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:37.864 [2024-04-24 20:05:20.058335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63043 ] 00:13:38.123 [2024-04-24 20:05:20.196010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.123 [2024-04-24 20:05:20.296387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.383  Copying: 32/32 [B] (average 31 kBps) 00:13:38.383 00:13:38.383 20:05:20 -- dd/posix.sh@27 -- # [[ n9qimq7abb2g3394z3d23h892vsdvhg2ntwqhzit1vgi4n7y45f2wc6umjuojlqs == \n\9\q\i\m\q\7\a\b\b\2\g\3\3\9\4\z\3\d\2\3\h\8\9\2\v\s\d\v\h\g\2\n\t\w\q\h\z\i\t\1\v\g\i\4\n\7\y\4\5\f\2\w\c\6\u\m\j\u\o\j\l\q\s ]] 00:13:38.383 00:13:38.383 real 0m0.567s 00:13:38.383 user 0m0.345s 00:13:38.383 sys 0m0.223s 00:13:38.383 ************************************ 00:13:38.383 END TEST dd_flag_append 00:13:38.383 ************************************ 00:13:38.383 20:05:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:38.383 20:05:20 -- common/autotest_common.sh@10 -- # set +x 00:13:38.383 20:05:20 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:13:38.383 20:05:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:38.383 20:05:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:38.383 20:05:20 -- common/autotest_common.sh@10 -- # set +x 00:13:38.655 ************************************ 00:13:38.655 START TEST dd_flag_directory 00:13:38.655 ************************************ 00:13:38.655 20:05:20 -- common/autotest_common.sh@1111 -- # directory 00:13:38.655 20:05:20 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:38.655 20:05:20 -- 
common/autotest_common.sh@638 -- # local es=0 00:13:38.655 20:05:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:38.655 20:05:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.655 20:05:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.655 20:05:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.655 20:05:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.655 20:05:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.655 20:05:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:38.655 20:05:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:38.655 20:05:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:38.655 20:05:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:38.655 [2024-04-24 20:05:20.788799] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:38.655 [2024-04-24 20:05:20.788944] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63081 ] 00:13:38.915 [2024-04-24 20:05:20.909532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.915 [2024-04-24 20:05:21.026026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.915 [2024-04-24 20:05:21.113653] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:38.915 [2024-04-24 20:05:21.113796] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:38.915 [2024-04-24 20:05:21.113824] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:39.173 [2024-04-24 20:05:21.207538] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:39.173 20:05:21 -- common/autotest_common.sh@641 -- # es=236 00:13:39.173 20:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:39.173 20:05:21 -- common/autotest_common.sh@650 -- # es=108 00:13:39.173 20:05:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:39.173 20:05:21 -- common/autotest_common.sh@658 -- # es=1 00:13:39.173 20:05:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:39.173 20:05:21 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:39.173 20:05:21 -- common/autotest_common.sh@638 -- # local es=0 00:13:39.173 20:05:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:39.173 20:05:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
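
The "Not a directory" errors above are the expected result: the NOT wrapper inverts spdk_dd's exit status, so this case only passes because opening the regular dump file with --iflag=directory is refused (the --oflag=directory direction traced around this point is handled the same way). As a rough stand-alone illustration, GNU dd exposes the same flag and refuses a regular file too; a sketch under that assumption, not part of the test:

# iflag=directory requests O_DIRECTORY, so a regular file must be rejected
printf 'payload' > scratch.dump0
if dd if=scratch.dump0 iflag=directory of=/dev/null status=none 2>/dev/null; then
    echo 'unexpected: regular file accepted with iflag=directory'
else
    echo 'rejected as expected (Not a directory)'
fi
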
00:13:39.173 20:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.173 20:05:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.173 20:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.173 20:05:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.173 20:05:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.174 20:05:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.174 20:05:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:39.174 20:05:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:39.174 [2024-04-24 20:05:21.379633] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:39.174 [2024-04-24 20:05:21.379701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63085 ] 00:13:39.433 [2024-04-24 20:05:21.514332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.433 [2024-04-24 20:05:21.615507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.433 [2024-04-24 20:05:21.683914] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:39.433 [2024-04-24 20:05:21.683959] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:39.433 [2024-04-24 20:05:21.683971] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:39.691 [2024-04-24 20:05:21.777716] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:39.691 20:05:21 -- common/autotest_common.sh@641 -- # es=236 00:13:39.691 20:05:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:39.691 20:05:21 -- common/autotest_common.sh@650 -- # es=108 00:13:39.691 20:05:21 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:39.691 20:05:21 -- common/autotest_common.sh@658 -- # es=1 00:13:39.691 20:05:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:39.691 00:13:39.691 real 0m1.169s 00:13:39.691 user 0m0.686s 00:13:39.691 sys 0m0.271s 00:13:39.691 20:05:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.691 20:05:21 -- common/autotest_common.sh@10 -- # set +x 00:13:39.691 ************************************ 00:13:39.691 END TEST dd_flag_directory 00:13:39.691 ************************************ 00:13:39.691 20:05:21 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:13:39.691 20:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:39.691 20:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.691 20:05:21 -- common/autotest_common.sh@10 -- # set +x 00:13:39.951 ************************************ 00:13:39.951 START TEST dd_flag_nofollow 00:13:39.951 ************************************ 00:13:39.951 20:05:22 -- common/autotest_common.sh@1111 -- # nofollow 00:13:39.951 20:05:22 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:39.951 20:05:22 -- dd/posix.sh@37 -- # 
local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:39.951 20:05:22 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:39.951 20:05:22 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:39.951 20:05:22 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:39.951 20:05:22 -- common/autotest_common.sh@638 -- # local es=0 00:13:39.951 20:05:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:39.951 20:05:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.951 20:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.951 20:05:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.951 20:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.951 20:05:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.951 20:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.951 20:05:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:39.951 20:05:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:39.951 20:05:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:39.951 [2024-04-24 20:05:22.103432] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
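
Both dump files now have symlinks pointing at them (dd.dump0.link and dd.dump1.link, created with ln -fs above). The two NOT cases traced next are expected to fail with "Too many levels of symbolic links", because nofollow forbids opening through a link; the copy at the end of the test, issued without the flag, goes through the same link and must succeed. A rough equivalent with GNU dd, which also accepts iflag=nofollow and oflag=nofollow (a sketch, not the test itself):

printf 'payload' > scratch.dump0
ln -fs scratch.dump0 scratch.dump0.link
# opening the link with nofollow must fail (ELOOP)...
dd if=scratch.dump0.link iflag=nofollow of=/dev/null status=none 2>/dev/null \
    && echo 'unexpected success' || echo 'nofollow rejected the symlink'
# ...while a plain copy through the same link is fine
dd if=scratch.dump0.link of=scratch.dump1 status=none && echo 'copy through link OK'
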
00:13:39.951 [2024-04-24 20:05:22.103494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63124 ] 00:13:40.211 [2024-04-24 20:05:22.242301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.211 [2024-04-24 20:05:22.343515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.211 [2024-04-24 20:05:22.411058] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:40.211 [2024-04-24 20:05:22.411105] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:40.211 [2024-04-24 20:05:22.411118] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:40.478 [2024-04-24 20:05:22.502350] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:40.478 20:05:22 -- common/autotest_common.sh@641 -- # es=216 00:13:40.478 20:05:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:40.478 20:05:22 -- common/autotest_common.sh@650 -- # es=88 00:13:40.478 20:05:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:40.478 20:05:22 -- common/autotest_common.sh@658 -- # es=1 00:13:40.478 20:05:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:40.479 20:05:22 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:40.479 20:05:22 -- common/autotest_common.sh@638 -- # local es=0 00:13:40.479 20:05:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:40.479 20:05:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.479 20:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.479 20:05:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.479 20:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.479 20:05:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.479 20:05:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:40.479 20:05:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:40.479 20:05:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:40.479 20:05:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:40.479 [2024-04-24 20:05:22.658287] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:40.479 [2024-04-24 20:05:22.658415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63133 ] 00:13:40.743 [2024-04-24 20:05:22.795136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.743 [2024-04-24 20:05:22.890813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.743 [2024-04-24 20:05:22.959486] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:40.743 [2024-04-24 20:05:22.959628] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:40.743 [2024-04-24 20:05:22.959666] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.001 [2024-04-24 20:05:23.053464] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:41.001 20:05:23 -- common/autotest_common.sh@641 -- # es=216 00:13:41.001 20:05:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:41.001 20:05:23 -- common/autotest_common.sh@650 -- # es=88 00:13:41.001 20:05:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:41.001 20:05:23 -- common/autotest_common.sh@658 -- # es=1 00:13:41.001 20:05:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:41.001 20:05:23 -- dd/posix.sh@46 -- # gen_bytes 512 00:13:41.001 20:05:23 -- dd/common.sh@98 -- # xtrace_disable 00:13:41.001 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.001 20:05:23 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:41.001 [2024-04-24 20:05:23.227861] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:41.001 [2024-04-24 20:05:23.228009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63141 ] 00:13:41.264 [2024-04-24 20:05:23.365978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.264 [2024-04-24 20:05:23.463050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.524  Copying: 512/512 [B] (average 500 kBps) 00:13:41.524 00:13:41.524 20:05:23 -- dd/posix.sh@49 -- # [[ thorekkjoohxusnd3q75m35af8qyempnj4879tdag0k5mcn0xfiquhl52uog65qcv49wdcghc8ldd9534sa7dd7pffxttb3csczd37kztxvsxu6yk1ano210fpli436c1cwk6qgk6fs05w5u7h5p0lsi1yr01foohyvbmzn03nsulzd43tbxjzc1ez0g7oxvy9f9m857y57s0ubbzd4ylwhp1y1x15pvzx19yxcelchn6vbjycf2r77sujuzladtm9ny1jiioeq65lhm4urubcd711d6xrze6fgxtezcy8u6f7n2kohdt05hhxqa73ks2pf458lgxhqo9jnk81zk4hfhan8otno9y0e3iyvh6yhaztv4e3gjfvja0r4lvnkhr2wc3c0x9tpe1bxca39qinkwv5kacf5pyl8e73vwo9nl6fwqdcrvi09cxtbrn4jbx1j5mi9l3souzhbog3uexb3hvp9bzd25tnx87hfzh0zkz5jvz8695nw6mxjzdw2l == \t\h\o\r\e\k\k\j\o\o\h\x\u\s\n\d\3\q\7\5\m\3\5\a\f\8\q\y\e\m\p\n\j\4\8\7\9\t\d\a\g\0\k\5\m\c\n\0\x\f\i\q\u\h\l\5\2\u\o\g\6\5\q\c\v\4\9\w\d\c\g\h\c\8\l\d\d\9\5\3\4\s\a\7\d\d\7\p\f\f\x\t\t\b\3\c\s\c\z\d\3\7\k\z\t\x\v\s\x\u\6\y\k\1\a\n\o\2\1\0\f\p\l\i\4\3\6\c\1\c\w\k\6\q\g\k\6\f\s\0\5\w\5\u\7\h\5\p\0\l\s\i\1\y\r\0\1\f\o\o\h\y\v\b\m\z\n\0\3\n\s\u\l\z\d\4\3\t\b\x\j\z\c\1\e\z\0\g\7\o\x\v\y\9\f\9\m\8\5\7\y\5\7\s\0\u\b\b\z\d\4\y\l\w\h\p\1\y\1\x\1\5\p\v\z\x\1\9\y\x\c\e\l\c\h\n\6\v\b\j\y\c\f\2\r\7\7\s\u\j\u\z\l\a\d\t\m\9\n\y\1\j\i\i\o\e\q\6\5\l\h\m\4\u\r\u\b\c\d\7\1\1\d\6\x\r\z\e\6\f\g\x\t\e\z\c\y\8\u\6\f\7\n\2\k\o\h\d\t\0\5\h\h\x\q\a\7\3\k\s\2\p\f\4\5\8\l\g\x\h\q\o\9\j\n\k\8\1\z\k\4\h\f\h\a\n\8\o\t\n\o\9\y\0\e\3\i\y\v\h\6\y\h\a\z\t\v\4\e\3\g\j\f\v\j\a\0\r\4\l\v\n\k\h\r\2\w\c\3\c\0\x\9\t\p\e\1\b\x\c\a\3\9\q\i\n\k\w\v\5\k\a\c\f\5\p\y\l\8\e\7\3\v\w\o\9\n\l\6\f\w\q\d\c\r\v\i\0\9\c\x\t\b\r\n\4\j\b\x\1\j\5\m\i\9\l\3\s\o\u\z\h\b\o\g\3\u\e\x\b\3\h\v\p\9\b\z\d\2\5\t\n\x\8\7\h\f\z\h\0\z\k\z\5\j\v\z\8\6\9\5\n\w\6\m\x\j\z\d\w\2\l ]] 00:13:41.524 00:13:41.524 real 0m1.702s 00:13:41.524 user 0m1.030s 00:13:41.524 sys 0m0.466s 00:13:41.524 20:05:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:41.524 ************************************ 00:13:41.524 END TEST dd_flag_nofollow 00:13:41.524 ************************************ 00:13:41.524 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.784 20:05:23 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:13:41.784 20:05:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.784 20:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.784 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.784 ************************************ 00:13:41.784 START TEST dd_flag_noatime 00:13:41.784 ************************************ 00:13:41.784 20:05:23 -- common/autotest_common.sh@1111 -- # noatime 00:13:41.784 20:05:23 -- dd/posix.sh@53 -- # local atime_if 00:13:41.784 20:05:23 -- dd/posix.sh@54 -- # local atime_of 00:13:41.784 20:05:23 -- dd/posix.sh@58 -- # gen_bytes 512 00:13:41.784 20:05:23 -- dd/common.sh@98 -- # xtrace_disable 00:13:41.784 20:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:41.784 20:05:23 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:41.784 20:05:23 -- dd/posix.sh@60 -- # atime_if=1713989123 00:13:41.784 20:05:23 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:41.784 20:05:23 -- dd/posix.sh@61 -- # atime_of=1713989123 00:13:41.784 20:05:23 -- dd/posix.sh@66 -- # sleep 1 00:13:42.725 20:05:24 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:42.725 [2024-04-24 20:05:24.965882] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:42.725 [2024-04-24 20:05:24.966037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63193 ] 00:13:42.986 [2024-04-24 20:05:25.104511] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.986 [2024-04-24 20:05:25.215186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.297  Copying: 512/512 [B] (average 500 kBps) 00:13:43.297 00:13:43.297 20:05:25 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:43.297 20:05:25 -- dd/posix.sh@69 -- # (( atime_if == 1713989123 )) 00:13:43.297 20:05:25 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:43.297 20:05:25 -- dd/posix.sh@70 -- # (( atime_of == 1713989123 )) 00:13:43.297 20:05:25 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:43.565 [2024-04-24 20:05:25.600968] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:43.565 [2024-04-24 20:05:25.601031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63201 ] 00:13:43.565 [2024-04-24 20:05:25.736937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.834 [2024-04-24 20:05:25.836830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.110  Copying: 512/512 [B] (average 500 kBps) 00:13:44.110 00:13:44.110 20:05:26 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:44.110 20:05:26 -- dd/posix.sh@73 -- # (( atime_if < 1713989125 )) 00:13:44.110 00:13:44.110 real 0m2.240s 00:13:44.110 user 0m0.742s 00:13:44.110 sys 0m0.519s 00:13:44.110 20:05:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:44.110 20:05:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.110 ************************************ 00:13:44.110 END TEST dd_flag_noatime 00:13:44.110 ************************************ 00:13:44.110 20:05:26 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:13:44.110 20:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:44.110 20:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.110 20:05:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.110 ************************************ 00:13:44.110 START TEST dd_flags_misc 00:13:44.110 ************************************ 00:13:44.110 20:05:26 -- common/autotest_common.sh@1111 -- # io 00:13:44.110 20:05:26 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:13:44.110 20:05:26 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:13:44.110 
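
The (( atime_if == 1713989123 )) and (( atime_of == 1713989123 )) checks above confirm that the copy issued with --iflag=noatime left both access times where they started, and the final (( atime_if < 1713989125 )) check shows that the follow-up copy without the flag did advance the source's atime. The property reduces to roughly this (GNU dd and coreutils stat assumed; run as the file's owner, and note that relatime mounts can mask the second half):

printf 'payload' > scratch.dump0
atime_before=$(stat --printf=%X scratch.dump0)
dd if=scratch.dump0 iflag=noatime of=scratch.dump1 status=none
(( $(stat --printf=%X scratch.dump0) == atime_before )) && echo 'atime preserved'

The dd_flags_misc run whose flag arrays are being set up around this point is summarized after its first pass below.
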
20:05:26 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:13:44.110 20:05:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:44.110 20:05:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:44.110 20:05:26 -- dd/common.sh@98 -- # xtrace_disable 00:13:44.110 20:05:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.110 20:05:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:44.110 20:05:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:44.110 [2024-04-24 20:05:26.330947] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:44.110 [2024-04-24 20:05:26.331051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63239 ] 00:13:44.383 [2024-04-24 20:05:26.466591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.383 [2024-04-24 20:05:26.556822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.658  Copying: 512/512 [B] (average 500 kBps) 00:13:44.658 00:13:44.658 20:05:26 -- dd/posix.sh@93 -- # [[ xju30yemgnp57pjukdd8udp7a1g4en1u2d4t1yowihssipu96esbdt93bni02p39yxso7mb7n8u7gzc01786hcy32pooecgqyohaihbljn44vtj2grmf3zpfuwvj3ix6r9y269set15j8do8o1n3n3h0lge1tp4efmxijxojwnmpy20m23hzumxkjz4eaq6rc5e7zstxbtg0m7w0ht9sp00x623gam73bwrdb8jkedaga1octkdqjlx4gp9x3q1qsth43nz3c27mhxc50zpl8sduftjcdmfi4zyzccmk6h1r7vxxi0u2s702zlnunmgvfakmjwfx90h8ab09laxe5ig2hgkt52l7t7mxx17ua8om1hiop4uax39u6yxwrq84g0gt5thatkz6nj9dfuzkgg5dmj7wli4zi69lf530l7e0b3t2j8f8gk1fk91g4f7qay1aub665h84miuvem3bc0rwmx7jakb1hfj2hq462q3gvins35u88pojo6rzzjxf == \x\j\u\3\0\y\e\m\g\n\p\5\7\p\j\u\k\d\d\8\u\d\p\7\a\1\g\4\e\n\1\u\2\d\4\t\1\y\o\w\i\h\s\s\i\p\u\9\6\e\s\b\d\t\9\3\b\n\i\0\2\p\3\9\y\x\s\o\7\m\b\7\n\8\u\7\g\z\c\0\1\7\8\6\h\c\y\3\2\p\o\o\e\c\g\q\y\o\h\a\i\h\b\l\j\n\4\4\v\t\j\2\g\r\m\f\3\z\p\f\u\w\v\j\3\i\x\6\r\9\y\2\6\9\s\e\t\1\5\j\8\d\o\8\o\1\n\3\n\3\h\0\l\g\e\1\t\p\4\e\f\m\x\i\j\x\o\j\w\n\m\p\y\2\0\m\2\3\h\z\u\m\x\k\j\z\4\e\a\q\6\r\c\5\e\7\z\s\t\x\b\t\g\0\m\7\w\0\h\t\9\s\p\0\0\x\6\2\3\g\a\m\7\3\b\w\r\d\b\8\j\k\e\d\a\g\a\1\o\c\t\k\d\q\j\l\x\4\g\p\9\x\3\q\1\q\s\t\h\4\3\n\z\3\c\2\7\m\h\x\c\5\0\z\p\l\8\s\d\u\f\t\j\c\d\m\f\i\4\z\y\z\c\c\m\k\6\h\1\r\7\v\x\x\i\0\u\2\s\7\0\2\z\l\n\u\n\m\g\v\f\a\k\m\j\w\f\x\9\0\h\8\a\b\0\9\l\a\x\e\5\i\g\2\h\g\k\t\5\2\l\7\t\7\m\x\x\1\7\u\a\8\o\m\1\h\i\o\p\4\u\a\x\3\9\u\6\y\x\w\r\q\8\4\g\0\g\t\5\t\h\a\t\k\z\6\n\j\9\d\f\u\z\k\g\g\5\d\m\j\7\w\l\i\4\z\i\6\9\l\f\5\3\0\l\7\e\0\b\3\t\2\j\8\f\8\g\k\1\f\k\9\1\g\4\f\7\q\a\y\1\a\u\b\6\6\5\h\8\4\m\i\u\v\e\m\3\b\c\0\r\w\m\x\7\j\a\k\b\1\h\f\j\2\h\q\4\6\2\q\3\g\v\i\n\s\3\5\u\8\8\p\o\j\o\6\r\z\z\j\x\f ]] 00:13:44.658 20:05:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:44.658 20:05:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:44.658 [2024-04-24 20:05:26.888739] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
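
Only the first combination (direct in, direct out) has completed at this point; every remaining pair is exercised the same way in the runs that follow. Condensed from the trace, the loop being executed is (paths abbreviated, SPDK_DD standing for the build/bin/spdk_dd binary used throughout):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)   # direct nonblock sync dsync
for flag_ro in "${flags_ro[@]}"; do
    # a fresh 512-byte payload is generated per read flag (the gen_bytes 512 calls)
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" \
                   --of=dd.dump1 --oflag="$flag_rw"
        # each pass is then verified by the [[ ... == ... ]] comparison of
        # dd.dump1's contents against the generated payload
    done
done
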
00:13:44.658 [2024-04-24 20:05:26.888796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63252 ] 00:13:44.934 [2024-04-24 20:05:27.026533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.934 [2024-04-24 20:05:27.120167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.198  Copying: 512/512 [B] (average 500 kBps) 00:13:45.198 00:13:45.198 20:05:27 -- dd/posix.sh@93 -- # [[ xju30yemgnp57pjukdd8udp7a1g4en1u2d4t1yowihssipu96esbdt93bni02p39yxso7mb7n8u7gzc01786hcy32pooecgqyohaihbljn44vtj2grmf3zpfuwvj3ix6r9y269set15j8do8o1n3n3h0lge1tp4efmxijxojwnmpy20m23hzumxkjz4eaq6rc5e7zstxbtg0m7w0ht9sp00x623gam73bwrdb8jkedaga1octkdqjlx4gp9x3q1qsth43nz3c27mhxc50zpl8sduftjcdmfi4zyzccmk6h1r7vxxi0u2s702zlnunmgvfakmjwfx90h8ab09laxe5ig2hgkt52l7t7mxx17ua8om1hiop4uax39u6yxwrq84g0gt5thatkz6nj9dfuzkgg5dmj7wli4zi69lf530l7e0b3t2j8f8gk1fk91g4f7qay1aub665h84miuvem3bc0rwmx7jakb1hfj2hq462q3gvins35u88pojo6rzzjxf == \x\j\u\3\0\y\e\m\g\n\p\5\7\p\j\u\k\d\d\8\u\d\p\7\a\1\g\4\e\n\1\u\2\d\4\t\1\y\o\w\i\h\s\s\i\p\u\9\6\e\s\b\d\t\9\3\b\n\i\0\2\p\3\9\y\x\s\o\7\m\b\7\n\8\u\7\g\z\c\0\1\7\8\6\h\c\y\3\2\p\o\o\e\c\g\q\y\o\h\a\i\h\b\l\j\n\4\4\v\t\j\2\g\r\m\f\3\z\p\f\u\w\v\j\3\i\x\6\r\9\y\2\6\9\s\e\t\1\5\j\8\d\o\8\o\1\n\3\n\3\h\0\l\g\e\1\t\p\4\e\f\m\x\i\j\x\o\j\w\n\m\p\y\2\0\m\2\3\h\z\u\m\x\k\j\z\4\e\a\q\6\r\c\5\e\7\z\s\t\x\b\t\g\0\m\7\w\0\h\t\9\s\p\0\0\x\6\2\3\g\a\m\7\3\b\w\r\d\b\8\j\k\e\d\a\g\a\1\o\c\t\k\d\q\j\l\x\4\g\p\9\x\3\q\1\q\s\t\h\4\3\n\z\3\c\2\7\m\h\x\c\5\0\z\p\l\8\s\d\u\f\t\j\c\d\m\f\i\4\z\y\z\c\c\m\k\6\h\1\r\7\v\x\x\i\0\u\2\s\7\0\2\z\l\n\u\n\m\g\v\f\a\k\m\j\w\f\x\9\0\h\8\a\b\0\9\l\a\x\e\5\i\g\2\h\g\k\t\5\2\l\7\t\7\m\x\x\1\7\u\a\8\o\m\1\h\i\o\p\4\u\a\x\3\9\u\6\y\x\w\r\q\8\4\g\0\g\t\5\t\h\a\t\k\z\6\n\j\9\d\f\u\z\k\g\g\5\d\m\j\7\w\l\i\4\z\i\6\9\l\f\5\3\0\l\7\e\0\b\3\t\2\j\8\f\8\g\k\1\f\k\9\1\g\4\f\7\q\a\y\1\a\u\b\6\6\5\h\8\4\m\i\u\v\e\m\3\b\c\0\r\w\m\x\7\j\a\k\b\1\h\f\j\2\h\q\4\6\2\q\3\g\v\i\n\s\3\5\u\8\8\p\o\j\o\6\r\z\z\j\x\f ]] 00:13:45.198 20:05:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:45.198 20:05:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:45.198 [2024-04-24 20:05:27.446983] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:45.198 [2024-04-24 20:05:27.447044] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63258 ] 00:13:45.458 [2024-04-24 20:05:27.585321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.458 [2024-04-24 20:05:27.677047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.718  Copying: 512/512 [B] (average 83 kBps) 00:13:45.718 00:13:45.718 20:05:27 -- dd/posix.sh@93 -- # [[ xju30yemgnp57pjukdd8udp7a1g4en1u2d4t1yowihssipu96esbdt93bni02p39yxso7mb7n8u7gzc01786hcy32pooecgqyohaihbljn44vtj2grmf3zpfuwvj3ix6r9y269set15j8do8o1n3n3h0lge1tp4efmxijxojwnmpy20m23hzumxkjz4eaq6rc5e7zstxbtg0m7w0ht9sp00x623gam73bwrdb8jkedaga1octkdqjlx4gp9x3q1qsth43nz3c27mhxc50zpl8sduftjcdmfi4zyzccmk6h1r7vxxi0u2s702zlnunmgvfakmjwfx90h8ab09laxe5ig2hgkt52l7t7mxx17ua8om1hiop4uax39u6yxwrq84g0gt5thatkz6nj9dfuzkgg5dmj7wli4zi69lf530l7e0b3t2j8f8gk1fk91g4f7qay1aub665h84miuvem3bc0rwmx7jakb1hfj2hq462q3gvins35u88pojo6rzzjxf == \x\j\u\3\0\y\e\m\g\n\p\5\7\p\j\u\k\d\d\8\u\d\p\7\a\1\g\4\e\n\1\u\2\d\4\t\1\y\o\w\i\h\s\s\i\p\u\9\6\e\s\b\d\t\9\3\b\n\i\0\2\p\3\9\y\x\s\o\7\m\b\7\n\8\u\7\g\z\c\0\1\7\8\6\h\c\y\3\2\p\o\o\e\c\g\q\y\o\h\a\i\h\b\l\j\n\4\4\v\t\j\2\g\r\m\f\3\z\p\f\u\w\v\j\3\i\x\6\r\9\y\2\6\9\s\e\t\1\5\j\8\d\o\8\o\1\n\3\n\3\h\0\l\g\e\1\t\p\4\e\f\m\x\i\j\x\o\j\w\n\m\p\y\2\0\m\2\3\h\z\u\m\x\k\j\z\4\e\a\q\6\r\c\5\e\7\z\s\t\x\b\t\g\0\m\7\w\0\h\t\9\s\p\0\0\x\6\2\3\g\a\m\7\3\b\w\r\d\b\8\j\k\e\d\a\g\a\1\o\c\t\k\d\q\j\l\x\4\g\p\9\x\3\q\1\q\s\t\h\4\3\n\z\3\c\2\7\m\h\x\c\5\0\z\p\l\8\s\d\u\f\t\j\c\d\m\f\i\4\z\y\z\c\c\m\k\6\h\1\r\7\v\x\x\i\0\u\2\s\7\0\2\z\l\n\u\n\m\g\v\f\a\k\m\j\w\f\x\9\0\h\8\a\b\0\9\l\a\x\e\5\i\g\2\h\g\k\t\5\2\l\7\t\7\m\x\x\1\7\u\a\8\o\m\1\h\i\o\p\4\u\a\x\3\9\u\6\y\x\w\r\q\8\4\g\0\g\t\5\t\h\a\t\k\z\6\n\j\9\d\f\u\z\k\g\g\5\d\m\j\7\w\l\i\4\z\i\6\9\l\f\5\3\0\l\7\e\0\b\3\t\2\j\8\f\8\g\k\1\f\k\9\1\g\4\f\7\q\a\y\1\a\u\b\6\6\5\h\8\4\m\i\u\v\e\m\3\b\c\0\r\w\m\x\7\j\a\k\b\1\h\f\j\2\h\q\4\6\2\q\3\g\v\i\n\s\3\5\u\8\8\p\o\j\o\6\r\z\z\j\x\f ]] 00:13:45.718 20:05:27 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:45.718 20:05:27 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:45.976 [2024-04-24 20:05:27.983620] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:45.976 [2024-04-24 20:05:27.983674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63273 ] 00:13:45.976 [2024-04-24 20:05:28.110815] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.976 [2024-04-24 20:05:28.197373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.234  Copying: 512/512 [B] (average 250 kBps) 00:13:46.234 00:13:46.234 20:05:28 -- dd/posix.sh@93 -- # [[ xju30yemgnp57pjukdd8udp7a1g4en1u2d4t1yowihssipu96esbdt93bni02p39yxso7mb7n8u7gzc01786hcy32pooecgqyohaihbljn44vtj2grmf3zpfuwvj3ix6r9y269set15j8do8o1n3n3h0lge1tp4efmxijxojwnmpy20m23hzumxkjz4eaq6rc5e7zstxbtg0m7w0ht9sp00x623gam73bwrdb8jkedaga1octkdqjlx4gp9x3q1qsth43nz3c27mhxc50zpl8sduftjcdmfi4zyzccmk6h1r7vxxi0u2s702zlnunmgvfakmjwfx90h8ab09laxe5ig2hgkt52l7t7mxx17ua8om1hiop4uax39u6yxwrq84g0gt5thatkz6nj9dfuzkgg5dmj7wli4zi69lf530l7e0b3t2j8f8gk1fk91g4f7qay1aub665h84miuvem3bc0rwmx7jakb1hfj2hq462q3gvins35u88pojo6rzzjxf == \x\j\u\3\0\y\e\m\g\n\p\5\7\p\j\u\k\d\d\8\u\d\p\7\a\1\g\4\e\n\1\u\2\d\4\t\1\y\o\w\i\h\s\s\i\p\u\9\6\e\s\b\d\t\9\3\b\n\i\0\2\p\3\9\y\x\s\o\7\m\b\7\n\8\u\7\g\z\c\0\1\7\8\6\h\c\y\3\2\p\o\o\e\c\g\q\y\o\h\a\i\h\b\l\j\n\4\4\v\t\j\2\g\r\m\f\3\z\p\f\u\w\v\j\3\i\x\6\r\9\y\2\6\9\s\e\t\1\5\j\8\d\o\8\o\1\n\3\n\3\h\0\l\g\e\1\t\p\4\e\f\m\x\i\j\x\o\j\w\n\m\p\y\2\0\m\2\3\h\z\u\m\x\k\j\z\4\e\a\q\6\r\c\5\e\7\z\s\t\x\b\t\g\0\m\7\w\0\h\t\9\s\p\0\0\x\6\2\3\g\a\m\7\3\b\w\r\d\b\8\j\k\e\d\a\g\a\1\o\c\t\k\d\q\j\l\x\4\g\p\9\x\3\q\1\q\s\t\h\4\3\n\z\3\c\2\7\m\h\x\c\5\0\z\p\l\8\s\d\u\f\t\j\c\d\m\f\i\4\z\y\z\c\c\m\k\6\h\1\r\7\v\x\x\i\0\u\2\s\7\0\2\z\l\n\u\n\m\g\v\f\a\k\m\j\w\f\x\9\0\h\8\a\b\0\9\l\a\x\e\5\i\g\2\h\g\k\t\5\2\l\7\t\7\m\x\x\1\7\u\a\8\o\m\1\h\i\o\p\4\u\a\x\3\9\u\6\y\x\w\r\q\8\4\g\0\g\t\5\t\h\a\t\k\z\6\n\j\9\d\f\u\z\k\g\g\5\d\m\j\7\w\l\i\4\z\i\6\9\l\f\5\3\0\l\7\e\0\b\3\t\2\j\8\f\8\g\k\1\f\k\9\1\g\4\f\7\q\a\y\1\a\u\b\6\6\5\h\8\4\m\i\u\v\e\m\3\b\c\0\r\w\m\x\7\j\a\k\b\1\h\f\j\2\h\q\4\6\2\q\3\g\v\i\n\s\3\5\u\8\8\p\o\j\o\6\r\z\z\j\x\f ]] 00:13:46.234 20:05:28 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:46.234 20:05:28 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:46.234 20:05:28 -- dd/common.sh@98 -- # xtrace_disable 00:13:46.234 20:05:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.234 20:05:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:46.234 20:05:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:46.492 [2024-04-24 20:05:28.534259] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:46.492 [2024-04-24 20:05:28.534378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63277 ] 00:13:46.492 [2024-04-24 20:05:28.671271] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.750 [2024-04-24 20:05:28.763339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.009  Copying: 512/512 [B] (average 500 kBps) 00:13:47.009 00:13:47.009 20:05:29 -- dd/posix.sh@93 -- # [[ iq2vu78ipj2m66dy4opp51cklpab6em2nqmj8h9abeeee4dkj2u2z0npoovry6xdlgrgmkxva7d4u79m6yc5nmr0tkqj9tvua73wa5atif5j4abaz3j5gjy8myyc79a6b0sdf7jaie3v0uw5qamlxk5u70eudc1c2fd9fbvqtcgx977y9uzafhscjarikm87nzqkgjpk4tjesdos5yg3e8chcrwtanzf9lfjjmxq4xyp9k6dwyjese46kcfva7sam9iq7twxgds39xx1w8brsi32siteoqj42jwscen48hhdnvyizaaf0xxmipehnyvsvozy2v17wwmf5m6udk4muess95i2nakbjb89f2ougkv4ts9xz8dtyovo0ecmjqdoudq1fjblls8azohq0p33jo18x4ngnxft9euvtdr02m9n3bs8dp1njht4cc86h3dcr1nnm0qzcxwf1x3hordc2m0aprcf1pr7cscttwovxymnha4ln5dfbn25wodcsl3d == \i\q\2\v\u\7\8\i\p\j\2\m\6\6\d\y\4\o\p\p\5\1\c\k\l\p\a\b\6\e\m\2\n\q\m\j\8\h\9\a\b\e\e\e\e\4\d\k\j\2\u\2\z\0\n\p\o\o\v\r\y\6\x\d\l\g\r\g\m\k\x\v\a\7\d\4\u\7\9\m\6\y\c\5\n\m\r\0\t\k\q\j\9\t\v\u\a\7\3\w\a\5\a\t\i\f\5\j\4\a\b\a\z\3\j\5\g\j\y\8\m\y\y\c\7\9\a\6\b\0\s\d\f\7\j\a\i\e\3\v\0\u\w\5\q\a\m\l\x\k\5\u\7\0\e\u\d\c\1\c\2\f\d\9\f\b\v\q\t\c\g\x\9\7\7\y\9\u\z\a\f\h\s\c\j\a\r\i\k\m\8\7\n\z\q\k\g\j\p\k\4\t\j\e\s\d\o\s\5\y\g\3\e\8\c\h\c\r\w\t\a\n\z\f\9\l\f\j\j\m\x\q\4\x\y\p\9\k\6\d\w\y\j\e\s\e\4\6\k\c\f\v\a\7\s\a\m\9\i\q\7\t\w\x\g\d\s\3\9\x\x\1\w\8\b\r\s\i\3\2\s\i\t\e\o\q\j\4\2\j\w\s\c\e\n\4\8\h\h\d\n\v\y\i\z\a\a\f\0\x\x\m\i\p\e\h\n\y\v\s\v\o\z\y\2\v\1\7\w\w\m\f\5\m\6\u\d\k\4\m\u\e\s\s\9\5\i\2\n\a\k\b\j\b\8\9\f\2\o\u\g\k\v\4\t\s\9\x\z\8\d\t\y\o\v\o\0\e\c\m\j\q\d\o\u\d\q\1\f\j\b\l\l\s\8\a\z\o\h\q\0\p\3\3\j\o\1\8\x\4\n\g\n\x\f\t\9\e\u\v\t\d\r\0\2\m\9\n\3\b\s\8\d\p\1\n\j\h\t\4\c\c\8\6\h\3\d\c\r\1\n\n\m\0\q\z\c\x\w\f\1\x\3\h\o\r\d\c\2\m\0\a\p\r\c\f\1\p\r\7\c\s\c\t\t\w\o\v\x\y\m\n\h\a\4\l\n\5\d\f\b\n\2\5\w\o\d\c\s\l\3\d ]] 00:13:47.009 20:05:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:47.009 20:05:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:47.009 [2024-04-24 20:05:29.070437] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:47.009 [2024-04-24 20:05:29.070565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63292 ] 00:13:47.009 [2024-04-24 20:05:29.208225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.267 [2024-04-24 20:05:29.300258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.561  Copying: 512/512 [B] (average 500 kBps) 00:13:47.561 00:13:47.561 20:05:29 -- dd/posix.sh@93 -- # [[ iq2vu78ipj2m66dy4opp51cklpab6em2nqmj8h9abeeee4dkj2u2z0npoovry6xdlgrgmkxva7d4u79m6yc5nmr0tkqj9tvua73wa5atif5j4abaz3j5gjy8myyc79a6b0sdf7jaie3v0uw5qamlxk5u70eudc1c2fd9fbvqtcgx977y9uzafhscjarikm87nzqkgjpk4tjesdos5yg3e8chcrwtanzf9lfjjmxq4xyp9k6dwyjese46kcfva7sam9iq7twxgds39xx1w8brsi32siteoqj42jwscen48hhdnvyizaaf0xxmipehnyvsvozy2v17wwmf5m6udk4muess95i2nakbjb89f2ougkv4ts9xz8dtyovo0ecmjqdoudq1fjblls8azohq0p33jo18x4ngnxft9euvtdr02m9n3bs8dp1njht4cc86h3dcr1nnm0qzcxwf1x3hordc2m0aprcf1pr7cscttwovxymnha4ln5dfbn25wodcsl3d == \i\q\2\v\u\7\8\i\p\j\2\m\6\6\d\y\4\o\p\p\5\1\c\k\l\p\a\b\6\e\m\2\n\q\m\j\8\h\9\a\b\e\e\e\e\4\d\k\j\2\u\2\z\0\n\p\o\o\v\r\y\6\x\d\l\g\r\g\m\k\x\v\a\7\d\4\u\7\9\m\6\y\c\5\n\m\r\0\t\k\q\j\9\t\v\u\a\7\3\w\a\5\a\t\i\f\5\j\4\a\b\a\z\3\j\5\g\j\y\8\m\y\y\c\7\9\a\6\b\0\s\d\f\7\j\a\i\e\3\v\0\u\w\5\q\a\m\l\x\k\5\u\7\0\e\u\d\c\1\c\2\f\d\9\f\b\v\q\t\c\g\x\9\7\7\y\9\u\z\a\f\h\s\c\j\a\r\i\k\m\8\7\n\z\q\k\g\j\p\k\4\t\j\e\s\d\o\s\5\y\g\3\e\8\c\h\c\r\w\t\a\n\z\f\9\l\f\j\j\m\x\q\4\x\y\p\9\k\6\d\w\y\j\e\s\e\4\6\k\c\f\v\a\7\s\a\m\9\i\q\7\t\w\x\g\d\s\3\9\x\x\1\w\8\b\r\s\i\3\2\s\i\t\e\o\q\j\4\2\j\w\s\c\e\n\4\8\h\h\d\n\v\y\i\z\a\a\f\0\x\x\m\i\p\e\h\n\y\v\s\v\o\z\y\2\v\1\7\w\w\m\f\5\m\6\u\d\k\4\m\u\e\s\s\9\5\i\2\n\a\k\b\j\b\8\9\f\2\o\u\g\k\v\4\t\s\9\x\z\8\d\t\y\o\v\o\0\e\c\m\j\q\d\o\u\d\q\1\f\j\b\l\l\s\8\a\z\o\h\q\0\p\3\3\j\o\1\8\x\4\n\g\n\x\f\t\9\e\u\v\t\d\r\0\2\m\9\n\3\b\s\8\d\p\1\n\j\h\t\4\c\c\8\6\h\3\d\c\r\1\n\n\m\0\q\z\c\x\w\f\1\x\3\h\o\r\d\c\2\m\0\a\p\r\c\f\1\p\r\7\c\s\c\t\t\w\o\v\x\y\m\n\h\a\4\l\n\5\d\f\b\n\2\5\w\o\d\c\s\l\3\d ]] 00:13:47.561 20:05:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:47.561 20:05:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:47.561 [2024-04-24 20:05:29.621724] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:47.561 [2024-04-24 20:05:29.621782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63296 ] 00:13:47.561 [2024-04-24 20:05:29.757630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.830 [2024-04-24 20:05:29.849363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.089  Copying: 512/512 [B] (average 166 kBps) 00:13:48.089 00:13:48.089 20:05:30 -- dd/posix.sh@93 -- # [[ iq2vu78ipj2m66dy4opp51cklpab6em2nqmj8h9abeeee4dkj2u2z0npoovry6xdlgrgmkxva7d4u79m6yc5nmr0tkqj9tvua73wa5atif5j4abaz3j5gjy8myyc79a6b0sdf7jaie3v0uw5qamlxk5u70eudc1c2fd9fbvqtcgx977y9uzafhscjarikm87nzqkgjpk4tjesdos5yg3e8chcrwtanzf9lfjjmxq4xyp9k6dwyjese46kcfva7sam9iq7twxgds39xx1w8brsi32siteoqj42jwscen48hhdnvyizaaf0xxmipehnyvsvozy2v17wwmf5m6udk4muess95i2nakbjb89f2ougkv4ts9xz8dtyovo0ecmjqdoudq1fjblls8azohq0p33jo18x4ngnxft9euvtdr02m9n3bs8dp1njht4cc86h3dcr1nnm0qzcxwf1x3hordc2m0aprcf1pr7cscttwovxymnha4ln5dfbn25wodcsl3d == \i\q\2\v\u\7\8\i\p\j\2\m\6\6\d\y\4\o\p\p\5\1\c\k\l\p\a\b\6\e\m\2\n\q\m\j\8\h\9\a\b\e\e\e\e\4\d\k\j\2\u\2\z\0\n\p\o\o\v\r\y\6\x\d\l\g\r\g\m\k\x\v\a\7\d\4\u\7\9\m\6\y\c\5\n\m\r\0\t\k\q\j\9\t\v\u\a\7\3\w\a\5\a\t\i\f\5\j\4\a\b\a\z\3\j\5\g\j\y\8\m\y\y\c\7\9\a\6\b\0\s\d\f\7\j\a\i\e\3\v\0\u\w\5\q\a\m\l\x\k\5\u\7\0\e\u\d\c\1\c\2\f\d\9\f\b\v\q\t\c\g\x\9\7\7\y\9\u\z\a\f\h\s\c\j\a\r\i\k\m\8\7\n\z\q\k\g\j\p\k\4\t\j\e\s\d\o\s\5\y\g\3\e\8\c\h\c\r\w\t\a\n\z\f\9\l\f\j\j\m\x\q\4\x\y\p\9\k\6\d\w\y\j\e\s\e\4\6\k\c\f\v\a\7\s\a\m\9\i\q\7\t\w\x\g\d\s\3\9\x\x\1\w\8\b\r\s\i\3\2\s\i\t\e\o\q\j\4\2\j\w\s\c\e\n\4\8\h\h\d\n\v\y\i\z\a\a\f\0\x\x\m\i\p\e\h\n\y\v\s\v\o\z\y\2\v\1\7\w\w\m\f\5\m\6\u\d\k\4\m\u\e\s\s\9\5\i\2\n\a\k\b\j\b\8\9\f\2\o\u\g\k\v\4\t\s\9\x\z\8\d\t\y\o\v\o\0\e\c\m\j\q\d\o\u\d\q\1\f\j\b\l\l\s\8\a\z\o\h\q\0\p\3\3\j\o\1\8\x\4\n\g\n\x\f\t\9\e\u\v\t\d\r\0\2\m\9\n\3\b\s\8\d\p\1\n\j\h\t\4\c\c\8\6\h\3\d\c\r\1\n\n\m\0\q\z\c\x\w\f\1\x\3\h\o\r\d\c\2\m\0\a\p\r\c\f\1\p\r\7\c\s\c\t\t\w\o\v\x\y\m\n\h\a\4\l\n\5\d\f\b\n\2\5\w\o\d\c\s\l\3\d ]] 00:13:48.089 20:05:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:48.089 20:05:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:48.089 [2024-04-24 20:05:30.157828] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:48.089 [2024-04-24 20:05:30.157878] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63311 ] 00:13:48.089 [2024-04-24 20:05:30.292184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.348 [2024-04-24 20:05:30.372343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.607  Copying: 512/512 [B] (average 250 kBps) 00:13:48.607 00:13:48.607 20:05:30 -- dd/posix.sh@93 -- # [[ iq2vu78ipj2m66dy4opp51cklpab6em2nqmj8h9abeeee4dkj2u2z0npoovry6xdlgrgmkxva7d4u79m6yc5nmr0tkqj9tvua73wa5atif5j4abaz3j5gjy8myyc79a6b0sdf7jaie3v0uw5qamlxk5u70eudc1c2fd9fbvqtcgx977y9uzafhscjarikm87nzqkgjpk4tjesdos5yg3e8chcrwtanzf9lfjjmxq4xyp9k6dwyjese46kcfva7sam9iq7twxgds39xx1w8brsi32siteoqj42jwscen48hhdnvyizaaf0xxmipehnyvsvozy2v17wwmf5m6udk4muess95i2nakbjb89f2ougkv4ts9xz8dtyovo0ecmjqdoudq1fjblls8azohq0p33jo18x4ngnxft9euvtdr02m9n3bs8dp1njht4cc86h3dcr1nnm0qzcxwf1x3hordc2m0aprcf1pr7cscttwovxymnha4ln5dfbn25wodcsl3d == \i\q\2\v\u\7\8\i\p\j\2\m\6\6\d\y\4\o\p\p\5\1\c\k\l\p\a\b\6\e\m\2\n\q\m\j\8\h\9\a\b\e\e\e\e\4\d\k\j\2\u\2\z\0\n\p\o\o\v\r\y\6\x\d\l\g\r\g\m\k\x\v\a\7\d\4\u\7\9\m\6\y\c\5\n\m\r\0\t\k\q\j\9\t\v\u\a\7\3\w\a\5\a\t\i\f\5\j\4\a\b\a\z\3\j\5\g\j\y\8\m\y\y\c\7\9\a\6\b\0\s\d\f\7\j\a\i\e\3\v\0\u\w\5\q\a\m\l\x\k\5\u\7\0\e\u\d\c\1\c\2\f\d\9\f\b\v\q\t\c\g\x\9\7\7\y\9\u\z\a\f\h\s\c\j\a\r\i\k\m\8\7\n\z\q\k\g\j\p\k\4\t\j\e\s\d\o\s\5\y\g\3\e\8\c\h\c\r\w\t\a\n\z\f\9\l\f\j\j\m\x\q\4\x\y\p\9\k\6\d\w\y\j\e\s\e\4\6\k\c\f\v\a\7\s\a\m\9\i\q\7\t\w\x\g\d\s\3\9\x\x\1\w\8\b\r\s\i\3\2\s\i\t\e\o\q\j\4\2\j\w\s\c\e\n\4\8\h\h\d\n\v\y\i\z\a\a\f\0\x\x\m\i\p\e\h\n\y\v\s\v\o\z\y\2\v\1\7\w\w\m\f\5\m\6\u\d\k\4\m\u\e\s\s\9\5\i\2\n\a\k\b\j\b\8\9\f\2\o\u\g\k\v\4\t\s\9\x\z\8\d\t\y\o\v\o\0\e\c\m\j\q\d\o\u\d\q\1\f\j\b\l\l\s\8\a\z\o\h\q\0\p\3\3\j\o\1\8\x\4\n\g\n\x\f\t\9\e\u\v\t\d\r\0\2\m\9\n\3\b\s\8\d\p\1\n\j\h\t\4\c\c\8\6\h\3\d\c\r\1\n\n\m\0\q\z\c\x\w\f\1\x\3\h\o\r\d\c\2\m\0\a\p\r\c\f\1\p\r\7\c\s\c\t\t\w\o\v\x\y\m\n\h\a\4\l\n\5\d\f\b\n\2\5\w\o\d\c\s\l\3\d ]] 00:13:48.607 00:13:48.607 real 0m4.380s 00:13:48.607 user 0m2.622s 00:13:48.607 sys 0m1.772s 00:13:48.607 20:05:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:48.607 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:13:48.607 ************************************ 00:13:48.607 END TEST dd_flags_misc 00:13:48.607 ************************************ 00:13:48.607 20:05:30 -- dd/posix.sh@131 -- # tests_forced_aio 00:13:48.607 20:05:30 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:13:48.607 * Second test run, disabling liburing, forcing AIO 00:13:48.607 20:05:30 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:13:48.607 20:05:30 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:13:48.607 20:05:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:48.607 20:05:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.607 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:13:48.607 ************************************ 00:13:48.607 START TEST dd_flag_append_forced_aio 00:13:48.607 ************************************ 00:13:48.607 20:05:30 -- common/autotest_common.sh@1111 -- # append 00:13:48.607 20:05:30 -- dd/posix.sh@16 -- # local dump0 00:13:48.608 20:05:30 -- dd/posix.sh@17 -- # local dump1 00:13:48.608 20:05:30 -- dd/posix.sh@19 -- # gen_bytes 32 00:13:48.608 20:05:30 -- 
dd/common.sh@98 -- # xtrace_disable 00:13:48.608 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:13:48.608 20:05:30 -- dd/posix.sh@19 -- # dump0=j37u9rzzj7rjwqrvfxedl4vlit8z7cvm 00:13:48.608 20:05:30 -- dd/posix.sh@20 -- # gen_bytes 32 00:13:48.608 20:05:30 -- dd/common.sh@98 -- # xtrace_disable 00:13:48.608 20:05:30 -- common/autotest_common.sh@10 -- # set +x 00:13:48.608 20:05:30 -- dd/posix.sh@20 -- # dump1=8vacb0cn2rd4osq3h7quyaorsbg43qoh 00:13:48.608 20:05:30 -- dd/posix.sh@22 -- # printf %s j37u9rzzj7rjwqrvfxedl4vlit8z7cvm 00:13:48.608 20:05:30 -- dd/posix.sh@23 -- # printf %s 8vacb0cn2rd4osq3h7quyaorsbg43qoh 00:13:48.608 20:05:30 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:13:48.608 [2024-04-24 20:05:30.848007] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:48.608 [2024-04-24 20:05:30.848147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63338 ] 00:13:48.867 [2024-04-24 20:05:30.984750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.867 [2024-04-24 20:05:31.088310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.391  Copying: 32/32 [B] (average 31 kBps) 00:13:49.391 00:13:49.391 20:05:31 -- dd/posix.sh@27 -- # [[ 8vacb0cn2rd4osq3h7quyaorsbg43qohj37u9rzzj7rjwqrvfxedl4vlit8z7cvm == \8\v\a\c\b\0\c\n\2\r\d\4\o\s\q\3\h\7\q\u\y\a\o\r\s\b\g\4\3\q\o\h\j\3\7\u\9\r\z\z\j\7\r\j\w\q\r\v\f\x\e\d\l\4\v\l\i\t\8\z\7\c\v\m ]] 00:13:49.391 00:13:49.391 real 0m0.604s 00:13:49.391 user 0m0.349s 00:13:49.391 sys 0m0.131s 00:13:49.391 20:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.391 ************************************ 00:13:49.391 20:05:31 -- common/autotest_common.sh@10 -- # set +x 00:13:49.391 END TEST dd_flag_append_forced_aio 00:13:49.391 ************************************ 00:13:49.391 20:05:31 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:13:49.391 20:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:49.391 20:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.391 20:05:31 -- common/autotest_common.sh@10 -- # set +x 00:13:49.392 ************************************ 00:13:49.392 START TEST dd_flag_directory_forced_aio 00:13:49.392 ************************************ 00:13:49.392 20:05:31 -- common/autotest_common.sh@1111 -- # directory 00:13:49.392 20:05:31 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:49.392 20:05:31 -- common/autotest_common.sh@638 -- # local es=0 00:13:49.392 20:05:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:49.392 20:05:31 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.392 20:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.392 20:05:31 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
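Note: the dd_flag_append_forced_aio check above only passes if dump1 ends up holding its own 32-byte string followed by dump0's, i.e. --oflag=append must open the destination for appending instead of truncating it. A sketch of the idea, with the random markers below as hypothetical stand-ins for whatever gen_bytes 32 produced:
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  marker0=$(tr -dc 'a-z0-9' </dev/urandom | head -c 32)   # stand-in for dump0's string
  marker1=$(tr -dc 'a-z0-9' </dev/urandom | head -c 32)   # stand-in for dump1's string
  printf %s "$marker0" > "$DUMP0"
  printf %s "$marker1" > "$DUMP1"
  # with --oflag=append the copied bytes land after dump1's existing contents
  "$DD" --aio --if="$DUMP0" --of="$DUMP1" --oflag=append
  [[ "$(< "$DUMP1")" == "${marker1}${marker0}" ]]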
00:13:49.392 20:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.392 20:05:31 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.392 20:05:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.392 20:05:31 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.392 20:05:31 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:49.392 20:05:31 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:49.392 [2024-04-24 20:05:31.585903] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:49.392 [2024-04-24 20:05:31.585964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63375 ] 00:13:49.651 [2024-04-24 20:05:31.721062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.651 [2024-04-24 20:05:31.818089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.651 [2024-04-24 20:05:31.889428] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:49.651 [2024-04-24 20:05:31.889473] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:49.651 [2024-04-24 20:05:31.889485] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:49.911 [2024-04-24 20:05:31.983235] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:49.911 20:05:32 -- common/autotest_common.sh@641 -- # es=236 00:13:49.911 20:05:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:49.911 20:05:32 -- common/autotest_common.sh@650 -- # es=108 00:13:49.911 20:05:32 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:49.911 20:05:32 -- common/autotest_common.sh@658 -- # es=1 00:13:49.911 20:05:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:49.911 20:05:32 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:49.911 20:05:32 -- common/autotest_common.sh@638 -- # local es=0 00:13:49.911 20:05:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:49.911 20:05:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.911 20:05:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.911 20:05:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.911 20:05:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.911 20:05:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.911 20:05:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:49.911 20:05:32 -- common/autotest_common.sh@632 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:49.911 20:05:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:49.911 20:05:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:49.911 [2024-04-24 20:05:32.154487] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:49.911 [2024-04-24 20:05:32.154634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:13:50.170 [2024-04-24 20:05:32.293098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.170 [2024-04-24 20:05:32.388362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.429 [2024-04-24 20:05:32.457489] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:50.429 [2024-04-24 20:05:32.457666] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:50.429 [2024-04-24 20:05:32.457716] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:50.429 [2024-04-24 20:05:32.553390] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:50.429 20:05:32 -- common/autotest_common.sh@641 -- # es=236 00:13:50.429 20:05:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:50.429 20:05:32 -- common/autotest_common.sh@650 -- # es=108 00:13:50.429 20:05:32 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:50.429 20:05:32 -- common/autotest_common.sh@658 -- # es=1 00:13:50.429 ************************************ 00:13:50.429 END TEST dd_flag_directory_forced_aio 00:13:50.429 ************************************ 00:13:50.429 20:05:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:50.429 00:13:50.429 real 0m1.141s 00:13:50.429 user 0m0.688s 00:13:50.429 sys 0m0.242s 00:13:50.429 20:05:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:50.429 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:50.689 20:05:32 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:13:50.689 20:05:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:50.689 20:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:50.689 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:50.689 ************************************ 00:13:50.689 START TEST dd_flag_nofollow_forced_aio 00:13:50.689 ************************************ 00:13:50.689 20:05:32 -- common/autotest_common.sh@1111 -- # nofollow 00:13:50.689 20:05:32 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:50.689 20:05:32 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:50.689 20:05:32 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:50.689 20:05:32 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:50.689 20:05:32 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:50.689 20:05:32 -- common/autotest_common.sh@638 -- # local es=0 00:13:50.689 20:05:32 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:50.689 20:05:32 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.689 20:05:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:50.689 20:05:32 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.689 20:05:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:50.689 20:05:32 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.689 20:05:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:50.689 20:05:32 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:50.689 20:05:32 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:50.689 20:05:32 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:50.689 [2024-04-24 20:05:32.874358] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:50.689 [2024-04-24 20:05:32.874491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:13:50.948 [2024-04-24 20:05:33.015598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.948 [2024-04-24 20:05:33.101343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.948 [2024-04-24 20:05:33.168962] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:50.948 [2024-04-24 20:05:33.169099] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:50.948 [2024-04-24 20:05:33.169138] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:51.207 [2024-04-24 20:05:33.262066] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:51.207 20:05:33 -- common/autotest_common.sh@641 -- # es=216 00:13:51.207 20:05:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:51.207 20:05:33 -- common/autotest_common.sh@650 -- # es=88 00:13:51.207 20:05:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:51.207 20:05:33 -- common/autotest_common.sh@658 -- # es=1 00:13:51.207 20:05:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:51.207 20:05:33 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:51.207 20:05:33 -- common/autotest_common.sh@638 -- # local es=0 00:13:51.207 20:05:33 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:51.207 20:05:33 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:51.207 20:05:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:51.207 20:05:33 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:51.207 20:05:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:51.207 20:05:33 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:51.207 20:05:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:51.207 20:05:33 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:51.207 20:05:33 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:51.207 20:05:33 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:51.207 [2024-04-24 20:05:33.429998] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:51.207 [2024-04-24 20:05:33.430058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:13:51.466 [2024-04-24 20:05:33.566629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.466 [2024-04-24 20:05:33.656096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.724 [2024-04-24 20:05:33.723906] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:51.724 [2024-04-24 20:05:33.723950] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:51.724 [2024-04-24 20:05:33.723963] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:51.724 [2024-04-24 20:05:33.814963] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:51.724 20:05:33 -- common/autotest_common.sh@641 -- # es=216 00:13:51.724 20:05:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:51.724 20:05:33 -- common/autotest_common.sh@650 -- # es=88 00:13:51.724 20:05:33 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:51.724 20:05:33 -- common/autotest_common.sh@658 -- # es=1 00:13:51.724 20:05:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:51.724 20:05:33 -- dd/posix.sh@46 -- # gen_bytes 512 00:13:51.724 20:05:33 -- dd/common.sh@98 -- # xtrace_disable 00:13:51.724 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:13:51.724 20:05:33 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:51.983 [2024-04-24 20:05:33.991610] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:51.983 [2024-04-24 20:05:33.991672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63435 ] 00:13:51.983 [2024-04-24 20:05:34.128220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.983 [2024-04-24 20:05:34.216489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.503  Copying: 512/512 [B] (average 500 kBps) 00:13:52.503 00:13:52.503 20:05:34 -- dd/posix.sh@49 -- # [[ st7palmxtk2v7r4odkoucio045wkyquapk209xybkssg45qp2mlbluq5n4xry2ojups83197lp9qstgkbyc5lq8y150ur6jr06y4296onryulpqslt372t8fxle10fj01jh6q8ewcc9kn5x99rqqglngh5g8kwmkjj0hp5ai4s4hncg0izsvr21y36hky3iy407gp2j3g7rcwneyqm5z86vtx4gncx09wm8cpme7j6fd3j2ss1djs51mtyk2u79mlclulhi77x2g5ggaettyuyl0k6ptfh2up9ttlsgjgvqm66js8eh1o2xx0k1q1wqkfcw4dl25pez04uacny3j0ul21iuwd4orgf9w2ovi5h81bhmrtlpsqtzv93ajld1foxteop6cnkdby9q28kzu3zv84qqurnn3n840dcgdvqsxp3l4yprh9wqlnfpwljndfpwf3tiuua71dg39t73w9mp2mpmmun1x5a38cc8i3m54yklh1cglos831fdci77k == \s\t\7\p\a\l\m\x\t\k\2\v\7\r\4\o\d\k\o\u\c\i\o\0\4\5\w\k\y\q\u\a\p\k\2\0\9\x\y\b\k\s\s\g\4\5\q\p\2\m\l\b\l\u\q\5\n\4\x\r\y\2\o\j\u\p\s\8\3\1\9\7\l\p\9\q\s\t\g\k\b\y\c\5\l\q\8\y\1\5\0\u\r\6\j\r\0\6\y\4\2\9\6\o\n\r\y\u\l\p\q\s\l\t\3\7\2\t\8\f\x\l\e\1\0\f\j\0\1\j\h\6\q\8\e\w\c\c\9\k\n\5\x\9\9\r\q\q\g\l\n\g\h\5\g\8\k\w\m\k\j\j\0\h\p\5\a\i\4\s\4\h\n\c\g\0\i\z\s\v\r\2\1\y\3\6\h\k\y\3\i\y\4\0\7\g\p\2\j\3\g\7\r\c\w\n\e\y\q\m\5\z\8\6\v\t\x\4\g\n\c\x\0\9\w\m\8\c\p\m\e\7\j\6\f\d\3\j\2\s\s\1\d\j\s\5\1\m\t\y\k\2\u\7\9\m\l\c\l\u\l\h\i\7\7\x\2\g\5\g\g\a\e\t\t\y\u\y\l\0\k\6\p\t\f\h\2\u\p\9\t\t\l\s\g\j\g\v\q\m\6\6\j\s\8\e\h\1\o\2\x\x\0\k\1\q\1\w\q\k\f\c\w\4\d\l\2\5\p\e\z\0\4\u\a\c\n\y\3\j\0\u\l\2\1\i\u\w\d\4\o\r\g\f\9\w\2\o\v\i\5\h\8\1\b\h\m\r\t\l\p\s\q\t\z\v\9\3\a\j\l\d\1\f\o\x\t\e\o\p\6\c\n\k\d\b\y\9\q\2\8\k\z\u\3\z\v\8\4\q\q\u\r\n\n\3\n\8\4\0\d\c\g\d\v\q\s\x\p\3\l\4\y\p\r\h\9\w\q\l\n\f\p\w\l\j\n\d\f\p\w\f\3\t\i\u\u\a\7\1\d\g\3\9\t\7\3\w\9\m\p\2\m\p\m\m\u\n\1\x\5\a\3\8\c\c\8\i\3\m\5\4\y\k\l\h\1\c\g\l\o\s\8\3\1\f\d\c\i\7\7\k ]] 00:13:52.503 00:13:52.503 real 0m1.707s 00:13:52.503 user 0m1.003s 00:13:52.503 sys 0m0.364s 00:13:52.503 20:05:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:52.503 20:05:34 -- common/autotest_common.sh@10 -- # set +x 00:13:52.503 ************************************ 00:13:52.503 END TEST dd_flag_nofollow_forced_aio 00:13:52.503 ************************************ 00:13:52.503 20:05:34 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:13:52.503 20:05:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:52.503 20:05:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.503 20:05:34 -- common/autotest_common.sh@10 -- # set +x 00:13:52.503 ************************************ 00:13:52.503 START TEST dd_flag_noatime_forced_aio 00:13:52.503 ************************************ 00:13:52.503 20:05:34 -- common/autotest_common.sh@1111 -- # noatime 00:13:52.503 20:05:34 -- dd/posix.sh@53 -- # local atime_if 00:13:52.503 20:05:34 -- dd/posix.sh@54 -- # local atime_of 00:13:52.503 20:05:34 -- dd/posix.sh@58 -- # gen_bytes 512 00:13:52.503 20:05:34 -- dd/common.sh@98 -- # xtrace_disable 00:13:52.503 20:05:34 -- common/autotest_common.sh@10 -- # set +x 00:13:52.503 20:05:34 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:52.503 20:05:34 -- dd/posix.sh@60 -- # atime_if=1713989134 
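Note: the "Too many levels of symbolic links" errors above are the expected outcome; dd_flag_nofollow_forced_aio passes only when the nofollow copies fail and the plain copy through the same link succeeds. A sketch of that sequence, using the symlink names shown in this log:
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  ln -fs "$DUMP0" "$DUMP0.link"
  ln -fs "$DUMP1" "$DUMP1.link"
  # reading through a symlink must fail when --iflag=nofollow is given ...
  ! "$DD" --aio --if="$DUMP0.link" --iflag=nofollow --of="$DUMP1"
  # ... writing through one must fail with --oflag=nofollow ...
  ! "$DD" --aio --if="$DUMP0" --of="$DUMP1.link" --oflag=nofollow
  # ... and without the flag the copy through the link behaves like a normal copy
  "$DD" --aio --if="$DUMP0.link" --of="$DUMP1"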
00:13:52.503 20:05:34 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:52.503 20:05:34 -- dd/posix.sh@61 -- # atime_of=1713989134 00:13:52.503 20:05:34 -- dd/posix.sh@66 -- # sleep 1 00:13:53.883 20:05:35 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:53.883 [2024-04-24 20:05:35.749362] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:13:53.883 [2024-04-24 20:05:35.749561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63485 ] 00:13:53.883 [2024-04-24 20:05:35.888561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.883 [2024-04-24 20:05:35.990959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.142  Copying: 512/512 [B] (average 500 kBps) 00:13:54.142 00:13:54.142 20:05:36 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:54.142 20:05:36 -- dd/posix.sh@69 -- # (( atime_if == 1713989134 )) 00:13:54.142 20:05:36 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:54.142 20:05:36 -- dd/posix.sh@70 -- # (( atime_of == 1713989134 )) 00:13:54.142 20:05:36 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:54.142 [2024-04-24 20:05:36.333927] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:54.142 [2024-04-24 20:05:36.333980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63497 ] 00:13:54.402 [2024-04-24 20:05:36.469763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.402 [2024-04-24 20:05:36.571355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.757  Copying: 512/512 [B] (average 500 kBps) 00:13:54.757 00:13:54.757 20:05:36 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:54.757 20:05:36 -- dd/posix.sh@73 -- # (( atime_if < 1713989136 )) 00:13:54.757 00:13:54.757 real 0m2.209s 00:13:54.757 user 0m0.701s 00:13:54.757 sys 0m0.268s 00:13:54.757 ************************************ 00:13:54.757 END TEST dd_flag_noatime_forced_aio 00:13:54.757 ************************************ 00:13:54.757 20:05:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:54.757 20:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:54.757 20:05:36 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:13:54.757 20:05:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:54.757 20:05:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.757 20:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:55.031 ************************************ 00:13:55.031 START TEST dd_flags_misc_forced_aio 00:13:55.031 ************************************ 00:13:55.031 20:05:37 -- common/autotest_common.sh@1111 -- # io 00:13:55.031 20:05:37 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:13:55.031 20:05:37 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:13:55.031 20:05:37 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:13:55.031 20:05:37 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:55.031 20:05:37 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:55.031 20:05:37 -- dd/common.sh@98 -- # xtrace_disable 00:13:55.031 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:13:55.031 20:05:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:55.031 20:05:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:55.031 [2024-04-24 20:05:37.077234] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
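Note: dd_flag_noatime_forced_aio, which finished just above, drives the stat --printf=%X comparisons seen in the log: a read with --iflag=noatime must leave the source's access time at the value captured before the copy, while a later read without the flag is expected to move it forward (that last assertion also depends on the filesystem's atime mount options). A sketch of the sequence:
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  atime_if=$(stat --printf=%X "$DUMP0")              # access time before any copy
  sleep 1                                            # so a later atime change is visible
  "$DD" --aio --if="$DUMP0" --iflag=noatime --of="$DUMP1"
  (( $(stat --printf=%X "$DUMP0") == atime_if ))     # noatime read left atime untouched
  "$DD" --aio --if="$DUMP0" --of="$DUMP1"
  (( atime_if < $(stat --printf=%X "$DUMP0") ))      # plain read should bump it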
00:13:55.031 [2024-04-24 20:05:37.077411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63527 ] 00:13:55.031 [2024-04-24 20:05:37.217206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.290 [2024-04-24 20:05:37.313228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.549  Copying: 512/512 [B] (average 500 kBps) 00:13:55.549 00:13:55.549 20:05:37 -- dd/posix.sh@93 -- # [[ glncujntrzq6lf8w0q67j2x1gi5px7i5aclq0tynxuezkli590ul5e45a3woy4ag25m7hx0nk5o955lsbr2fsld514e8xgy8ewpab37jq4vy5mqog6h591w08ttcfj9wot2241vdqf4f02ijwlcpogl8cu33kwny58zwqpxbw440rbso71kyj92rnv6e9ra3y3j0j5byx1efb2oy3qj809b4o8jxp51xw6p8awfwkaj5k9e305iuryy05da70e519xmeu9lvj2b0i3xven9bgash3ocn77n35c42q9s2gp5fwn94ewrxj4ga8jcex3kolbs4rhrk144lavihysnfeftn51pbbgoru6wxi7y64hdkib6515fu02371d98bpb9y90lxf59afr6a1a680tk29c369jgmo4bg43hcw3n39g1sogl73t0kmun7cg1bwl98ryyupiy7eodxgsfguqp6b9wux89mvo3h1cq75sasvh788lse51kpd2xjlgyt4gv == \g\l\n\c\u\j\n\t\r\z\q\6\l\f\8\w\0\q\6\7\j\2\x\1\g\i\5\p\x\7\i\5\a\c\l\q\0\t\y\n\x\u\e\z\k\l\i\5\9\0\u\l\5\e\4\5\a\3\w\o\y\4\a\g\2\5\m\7\h\x\0\n\k\5\o\9\5\5\l\s\b\r\2\f\s\l\d\5\1\4\e\8\x\g\y\8\e\w\p\a\b\3\7\j\q\4\v\y\5\m\q\o\g\6\h\5\9\1\w\0\8\t\t\c\f\j\9\w\o\t\2\2\4\1\v\d\q\f\4\f\0\2\i\j\w\l\c\p\o\g\l\8\c\u\3\3\k\w\n\y\5\8\z\w\q\p\x\b\w\4\4\0\r\b\s\o\7\1\k\y\j\9\2\r\n\v\6\e\9\r\a\3\y\3\j\0\j\5\b\y\x\1\e\f\b\2\o\y\3\q\j\8\0\9\b\4\o\8\j\x\p\5\1\x\w\6\p\8\a\w\f\w\k\a\j\5\k\9\e\3\0\5\i\u\r\y\y\0\5\d\a\7\0\e\5\1\9\x\m\e\u\9\l\v\j\2\b\0\i\3\x\v\e\n\9\b\g\a\s\h\3\o\c\n\7\7\n\3\5\c\4\2\q\9\s\2\g\p\5\f\w\n\9\4\e\w\r\x\j\4\g\a\8\j\c\e\x\3\k\o\l\b\s\4\r\h\r\k\1\4\4\l\a\v\i\h\y\s\n\f\e\f\t\n\5\1\p\b\b\g\o\r\u\6\w\x\i\7\y\6\4\h\d\k\i\b\6\5\1\5\f\u\0\2\3\7\1\d\9\8\b\p\b\9\y\9\0\l\x\f\5\9\a\f\r\6\a\1\a\6\8\0\t\k\2\9\c\3\6\9\j\g\m\o\4\b\g\4\3\h\c\w\3\n\3\9\g\1\s\o\g\l\7\3\t\0\k\m\u\n\7\c\g\1\b\w\l\9\8\r\y\y\u\p\i\y\7\e\o\d\x\g\s\f\g\u\q\p\6\b\9\w\u\x\8\9\m\v\o\3\h\1\c\q\7\5\s\a\s\v\h\7\8\8\l\s\e\5\1\k\p\d\2\x\j\l\g\y\t\4\g\v ]] 00:13:55.550 20:05:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:55.550 20:05:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:55.550 [2024-04-24 20:05:37.650703] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:55.550 [2024-04-24 20:05:37.650769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63540 ] 00:13:55.550 [2024-04-24 20:05:37.785712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.809 [2024-04-24 20:05:37.884498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.068  Copying: 512/512 [B] (average 500 kBps) 00:13:56.068 00:13:56.068 20:05:38 -- dd/posix.sh@93 -- # [[ glncujntrzq6lf8w0q67j2x1gi5px7i5aclq0tynxuezkli590ul5e45a3woy4ag25m7hx0nk5o955lsbr2fsld514e8xgy8ewpab37jq4vy5mqog6h591w08ttcfj9wot2241vdqf4f02ijwlcpogl8cu33kwny58zwqpxbw440rbso71kyj92rnv6e9ra3y3j0j5byx1efb2oy3qj809b4o8jxp51xw6p8awfwkaj5k9e305iuryy05da70e519xmeu9lvj2b0i3xven9bgash3ocn77n35c42q9s2gp5fwn94ewrxj4ga8jcex3kolbs4rhrk144lavihysnfeftn51pbbgoru6wxi7y64hdkib6515fu02371d98bpb9y90lxf59afr6a1a680tk29c369jgmo4bg43hcw3n39g1sogl73t0kmun7cg1bwl98ryyupiy7eodxgsfguqp6b9wux89mvo3h1cq75sasvh788lse51kpd2xjlgyt4gv == \g\l\n\c\u\j\n\t\r\z\q\6\l\f\8\w\0\q\6\7\j\2\x\1\g\i\5\p\x\7\i\5\a\c\l\q\0\t\y\n\x\u\e\z\k\l\i\5\9\0\u\l\5\e\4\5\a\3\w\o\y\4\a\g\2\5\m\7\h\x\0\n\k\5\o\9\5\5\l\s\b\r\2\f\s\l\d\5\1\4\e\8\x\g\y\8\e\w\p\a\b\3\7\j\q\4\v\y\5\m\q\o\g\6\h\5\9\1\w\0\8\t\t\c\f\j\9\w\o\t\2\2\4\1\v\d\q\f\4\f\0\2\i\j\w\l\c\p\o\g\l\8\c\u\3\3\k\w\n\y\5\8\z\w\q\p\x\b\w\4\4\0\r\b\s\o\7\1\k\y\j\9\2\r\n\v\6\e\9\r\a\3\y\3\j\0\j\5\b\y\x\1\e\f\b\2\o\y\3\q\j\8\0\9\b\4\o\8\j\x\p\5\1\x\w\6\p\8\a\w\f\w\k\a\j\5\k\9\e\3\0\5\i\u\r\y\y\0\5\d\a\7\0\e\5\1\9\x\m\e\u\9\l\v\j\2\b\0\i\3\x\v\e\n\9\b\g\a\s\h\3\o\c\n\7\7\n\3\5\c\4\2\q\9\s\2\g\p\5\f\w\n\9\4\e\w\r\x\j\4\g\a\8\j\c\e\x\3\k\o\l\b\s\4\r\h\r\k\1\4\4\l\a\v\i\h\y\s\n\f\e\f\t\n\5\1\p\b\b\g\o\r\u\6\w\x\i\7\y\6\4\h\d\k\i\b\6\5\1\5\f\u\0\2\3\7\1\d\9\8\b\p\b\9\y\9\0\l\x\f\5\9\a\f\r\6\a\1\a\6\8\0\t\k\2\9\c\3\6\9\j\g\m\o\4\b\g\4\3\h\c\w\3\n\3\9\g\1\s\o\g\l\7\3\t\0\k\m\u\n\7\c\g\1\b\w\l\9\8\r\y\y\u\p\i\y\7\e\o\d\x\g\s\f\g\u\q\p\6\b\9\w\u\x\8\9\m\v\o\3\h\1\c\q\7\5\s\a\s\v\h\7\8\8\l\s\e\5\1\k\p\d\2\x\j\l\g\y\t\4\g\v ]] 00:13:56.068 20:05:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:56.068 20:05:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:56.068 [2024-04-24 20:05:38.226038] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:56.068 [2024-04-24 20:05:38.226103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63548 ] 00:13:56.326 [2024-04-24 20:05:38.363575] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.326 [2024-04-24 20:05:38.461834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.586  Copying: 512/512 [B] (average 166 kBps) 00:13:56.586 00:13:56.587 20:05:38 -- dd/posix.sh@93 -- # [[ glncujntrzq6lf8w0q67j2x1gi5px7i5aclq0tynxuezkli590ul5e45a3woy4ag25m7hx0nk5o955lsbr2fsld514e8xgy8ewpab37jq4vy5mqog6h591w08ttcfj9wot2241vdqf4f02ijwlcpogl8cu33kwny58zwqpxbw440rbso71kyj92rnv6e9ra3y3j0j5byx1efb2oy3qj809b4o8jxp51xw6p8awfwkaj5k9e305iuryy05da70e519xmeu9lvj2b0i3xven9bgash3ocn77n35c42q9s2gp5fwn94ewrxj4ga8jcex3kolbs4rhrk144lavihysnfeftn51pbbgoru6wxi7y64hdkib6515fu02371d98bpb9y90lxf59afr6a1a680tk29c369jgmo4bg43hcw3n39g1sogl73t0kmun7cg1bwl98ryyupiy7eodxgsfguqp6b9wux89mvo3h1cq75sasvh788lse51kpd2xjlgyt4gv == \g\l\n\c\u\j\n\t\r\z\q\6\l\f\8\w\0\q\6\7\j\2\x\1\g\i\5\p\x\7\i\5\a\c\l\q\0\t\y\n\x\u\e\z\k\l\i\5\9\0\u\l\5\e\4\5\a\3\w\o\y\4\a\g\2\5\m\7\h\x\0\n\k\5\o\9\5\5\l\s\b\r\2\f\s\l\d\5\1\4\e\8\x\g\y\8\e\w\p\a\b\3\7\j\q\4\v\y\5\m\q\o\g\6\h\5\9\1\w\0\8\t\t\c\f\j\9\w\o\t\2\2\4\1\v\d\q\f\4\f\0\2\i\j\w\l\c\p\o\g\l\8\c\u\3\3\k\w\n\y\5\8\z\w\q\p\x\b\w\4\4\0\r\b\s\o\7\1\k\y\j\9\2\r\n\v\6\e\9\r\a\3\y\3\j\0\j\5\b\y\x\1\e\f\b\2\o\y\3\q\j\8\0\9\b\4\o\8\j\x\p\5\1\x\w\6\p\8\a\w\f\w\k\a\j\5\k\9\e\3\0\5\i\u\r\y\y\0\5\d\a\7\0\e\5\1\9\x\m\e\u\9\l\v\j\2\b\0\i\3\x\v\e\n\9\b\g\a\s\h\3\o\c\n\7\7\n\3\5\c\4\2\q\9\s\2\g\p\5\f\w\n\9\4\e\w\r\x\j\4\g\a\8\j\c\e\x\3\k\o\l\b\s\4\r\h\r\k\1\4\4\l\a\v\i\h\y\s\n\f\e\f\t\n\5\1\p\b\b\g\o\r\u\6\w\x\i\7\y\6\4\h\d\k\i\b\6\5\1\5\f\u\0\2\3\7\1\d\9\8\b\p\b\9\y\9\0\l\x\f\5\9\a\f\r\6\a\1\a\6\8\0\t\k\2\9\c\3\6\9\j\g\m\o\4\b\g\4\3\h\c\w\3\n\3\9\g\1\s\o\g\l\7\3\t\0\k\m\u\n\7\c\g\1\b\w\l\9\8\r\y\y\u\p\i\y\7\e\o\d\x\g\s\f\g\u\q\p\6\b\9\w\u\x\8\9\m\v\o\3\h\1\c\q\7\5\s\a\s\v\h\7\8\8\l\s\e\5\1\k\p\d\2\x\j\l\g\y\t\4\g\v ]] 00:13:56.587 20:05:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:56.587 20:05:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:56.587 [2024-04-24 20:05:38.810748] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:56.587 [2024-04-24 20:05:38.810820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63555 ] 00:13:56.845 [2024-04-24 20:05:38.950125] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.845 [2024-04-24 20:05:39.051347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.103  Copying: 512/512 [B] (average 250 kBps) 00:13:57.103 00:13:57.103 20:05:39 -- dd/posix.sh@93 -- # [[ glncujntrzq6lf8w0q67j2x1gi5px7i5aclq0tynxuezkli590ul5e45a3woy4ag25m7hx0nk5o955lsbr2fsld514e8xgy8ewpab37jq4vy5mqog6h591w08ttcfj9wot2241vdqf4f02ijwlcpogl8cu33kwny58zwqpxbw440rbso71kyj92rnv6e9ra3y3j0j5byx1efb2oy3qj809b4o8jxp51xw6p8awfwkaj5k9e305iuryy05da70e519xmeu9lvj2b0i3xven9bgash3ocn77n35c42q9s2gp5fwn94ewrxj4ga8jcex3kolbs4rhrk144lavihysnfeftn51pbbgoru6wxi7y64hdkib6515fu02371d98bpb9y90lxf59afr6a1a680tk29c369jgmo4bg43hcw3n39g1sogl73t0kmun7cg1bwl98ryyupiy7eodxgsfguqp6b9wux89mvo3h1cq75sasvh788lse51kpd2xjlgyt4gv == \g\l\n\c\u\j\n\t\r\z\q\6\l\f\8\w\0\q\6\7\j\2\x\1\g\i\5\p\x\7\i\5\a\c\l\q\0\t\y\n\x\u\e\z\k\l\i\5\9\0\u\l\5\e\4\5\a\3\w\o\y\4\a\g\2\5\m\7\h\x\0\n\k\5\o\9\5\5\l\s\b\r\2\f\s\l\d\5\1\4\e\8\x\g\y\8\e\w\p\a\b\3\7\j\q\4\v\y\5\m\q\o\g\6\h\5\9\1\w\0\8\t\t\c\f\j\9\w\o\t\2\2\4\1\v\d\q\f\4\f\0\2\i\j\w\l\c\p\o\g\l\8\c\u\3\3\k\w\n\y\5\8\z\w\q\p\x\b\w\4\4\0\r\b\s\o\7\1\k\y\j\9\2\r\n\v\6\e\9\r\a\3\y\3\j\0\j\5\b\y\x\1\e\f\b\2\o\y\3\q\j\8\0\9\b\4\o\8\j\x\p\5\1\x\w\6\p\8\a\w\f\w\k\a\j\5\k\9\e\3\0\5\i\u\r\y\y\0\5\d\a\7\0\e\5\1\9\x\m\e\u\9\l\v\j\2\b\0\i\3\x\v\e\n\9\b\g\a\s\h\3\o\c\n\7\7\n\3\5\c\4\2\q\9\s\2\g\p\5\f\w\n\9\4\e\w\r\x\j\4\g\a\8\j\c\e\x\3\k\o\l\b\s\4\r\h\r\k\1\4\4\l\a\v\i\h\y\s\n\f\e\f\t\n\5\1\p\b\b\g\o\r\u\6\w\x\i\7\y\6\4\h\d\k\i\b\6\5\1\5\f\u\0\2\3\7\1\d\9\8\b\p\b\9\y\9\0\l\x\f\5\9\a\f\r\6\a\1\a\6\8\0\t\k\2\9\c\3\6\9\j\g\m\o\4\b\g\4\3\h\c\w\3\n\3\9\g\1\s\o\g\l\7\3\t\0\k\m\u\n\7\c\g\1\b\w\l\9\8\r\y\y\u\p\i\y\7\e\o\d\x\g\s\f\g\u\q\p\6\b\9\w\u\x\8\9\m\v\o\3\h\1\c\q\7\5\s\a\s\v\h\7\8\8\l\s\e\5\1\k\p\d\2\x\j\l\g\y\t\4\g\v ]] 00:13:57.103 20:05:39 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:57.103 20:05:39 -- dd/posix.sh@86 -- # gen_bytes 512 00:13:57.103 20:05:39 -- dd/common.sh@98 -- # xtrace_disable 00:13:57.103 20:05:39 -- common/autotest_common.sh@10 -- # set +x 00:13:57.362 20:05:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:57.362 20:05:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:57.362 [2024-04-24 20:05:39.406905] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:57.362 [2024-04-24 20:05:39.406983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63563 ] 00:13:57.362 [2024-04-24 20:05:39.545561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.621 [2024-04-24 20:05:39.642983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.888  Copying: 512/512 [B] (average 500 kBps) 00:13:57.888 00:13:57.888 20:05:39 -- dd/posix.sh@93 -- # [[ dzvpc0xlxc2y79pr08on312b057zjrkgfxk9mspq4kiayigz4yo7q161e3ld1yw20ff9txqguslzyyqax11co64u81e1jqh9113ys1nrwl6wcwxhhjol44zxaohc9ki4fhwju6tdc4dsk4nr3mwihef8iqxhz7zc3627frpbrnf2nkwcm15oc84mfioiro0497alt474d5i5t40syu92uk73ypt6oprvesbg3dljvog7q27jgj8kcv8nn5pqg8lusysbzuvllxfzmug9w8j2axw79xazswr3lx0h235idwqgxvu8jnxpt5rp6t7ymd295tbjv9raorviyd5reayhhqxe7jskk6oil4a3cjii7aaejcihj90nxqmr4n0sp5ytcc7c8wc8mt22u13fqyb30s63ptvkogc4i89zj6omuw6baxl6y97014db62503raeaf2f1jbygmh4iwklbbjsn1plikluwe4xx1y5y5dfi20x2jwqebbwepdzwievilio == \d\z\v\p\c\0\x\l\x\c\2\y\7\9\p\r\0\8\o\n\3\1\2\b\0\5\7\z\j\r\k\g\f\x\k\9\m\s\p\q\4\k\i\a\y\i\g\z\4\y\o\7\q\1\6\1\e\3\l\d\1\y\w\2\0\f\f\9\t\x\q\g\u\s\l\z\y\y\q\a\x\1\1\c\o\6\4\u\8\1\e\1\j\q\h\9\1\1\3\y\s\1\n\r\w\l\6\w\c\w\x\h\h\j\o\l\4\4\z\x\a\o\h\c\9\k\i\4\f\h\w\j\u\6\t\d\c\4\d\s\k\4\n\r\3\m\w\i\h\e\f\8\i\q\x\h\z\7\z\c\3\6\2\7\f\r\p\b\r\n\f\2\n\k\w\c\m\1\5\o\c\8\4\m\f\i\o\i\r\o\0\4\9\7\a\l\t\4\7\4\d\5\i\5\t\4\0\s\y\u\9\2\u\k\7\3\y\p\t\6\o\p\r\v\e\s\b\g\3\d\l\j\v\o\g\7\q\2\7\j\g\j\8\k\c\v\8\n\n\5\p\q\g\8\l\u\s\y\s\b\z\u\v\l\l\x\f\z\m\u\g\9\w\8\j\2\a\x\w\7\9\x\a\z\s\w\r\3\l\x\0\h\2\3\5\i\d\w\q\g\x\v\u\8\j\n\x\p\t\5\r\p\6\t\7\y\m\d\2\9\5\t\b\j\v\9\r\a\o\r\v\i\y\d\5\r\e\a\y\h\h\q\x\e\7\j\s\k\k\6\o\i\l\4\a\3\c\j\i\i\7\a\a\e\j\c\i\h\j\9\0\n\x\q\m\r\4\n\0\s\p\5\y\t\c\c\7\c\8\w\c\8\m\t\2\2\u\1\3\f\q\y\b\3\0\s\6\3\p\t\v\k\o\g\c\4\i\8\9\z\j\6\o\m\u\w\6\b\a\x\l\6\y\9\7\0\1\4\d\b\6\2\5\0\3\r\a\e\a\f\2\f\1\j\b\y\g\m\h\4\i\w\k\l\b\b\j\s\n\1\p\l\i\k\l\u\w\e\4\x\x\1\y\5\y\5\d\f\i\2\0\x\2\j\w\q\e\b\b\w\e\p\d\z\w\i\e\v\i\l\i\o ]] 00:13:57.888 20:05:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:57.888 20:05:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:57.888 [2024-04-24 20:05:39.984069] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:57.888 [2024-04-24 20:05:39.984138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63570 ] 00:13:57.888 [2024-04-24 20:05:40.123160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.163 [2024-04-24 20:05:40.223928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.422  Copying: 512/512 [B] (average 500 kBps) 00:13:58.422 00:13:58.422 20:05:40 -- dd/posix.sh@93 -- # [[ dzvpc0xlxc2y79pr08on312b057zjrkgfxk9mspq4kiayigz4yo7q161e3ld1yw20ff9txqguslzyyqax11co64u81e1jqh9113ys1nrwl6wcwxhhjol44zxaohc9ki4fhwju6tdc4dsk4nr3mwihef8iqxhz7zc3627frpbrnf2nkwcm15oc84mfioiro0497alt474d5i5t40syu92uk73ypt6oprvesbg3dljvog7q27jgj8kcv8nn5pqg8lusysbzuvllxfzmug9w8j2axw79xazswr3lx0h235idwqgxvu8jnxpt5rp6t7ymd295tbjv9raorviyd5reayhhqxe7jskk6oil4a3cjii7aaejcihj90nxqmr4n0sp5ytcc7c8wc8mt22u13fqyb30s63ptvkogc4i89zj6omuw6baxl6y97014db62503raeaf2f1jbygmh4iwklbbjsn1plikluwe4xx1y5y5dfi20x2jwqebbwepdzwievilio == \d\z\v\p\c\0\x\l\x\c\2\y\7\9\p\r\0\8\o\n\3\1\2\b\0\5\7\z\j\r\k\g\f\x\k\9\m\s\p\q\4\k\i\a\y\i\g\z\4\y\o\7\q\1\6\1\e\3\l\d\1\y\w\2\0\f\f\9\t\x\q\g\u\s\l\z\y\y\q\a\x\1\1\c\o\6\4\u\8\1\e\1\j\q\h\9\1\1\3\y\s\1\n\r\w\l\6\w\c\w\x\h\h\j\o\l\4\4\z\x\a\o\h\c\9\k\i\4\f\h\w\j\u\6\t\d\c\4\d\s\k\4\n\r\3\m\w\i\h\e\f\8\i\q\x\h\z\7\z\c\3\6\2\7\f\r\p\b\r\n\f\2\n\k\w\c\m\1\5\o\c\8\4\m\f\i\o\i\r\o\0\4\9\7\a\l\t\4\7\4\d\5\i\5\t\4\0\s\y\u\9\2\u\k\7\3\y\p\t\6\o\p\r\v\e\s\b\g\3\d\l\j\v\o\g\7\q\2\7\j\g\j\8\k\c\v\8\n\n\5\p\q\g\8\l\u\s\y\s\b\z\u\v\l\l\x\f\z\m\u\g\9\w\8\j\2\a\x\w\7\9\x\a\z\s\w\r\3\l\x\0\h\2\3\5\i\d\w\q\g\x\v\u\8\j\n\x\p\t\5\r\p\6\t\7\y\m\d\2\9\5\t\b\j\v\9\r\a\o\r\v\i\y\d\5\r\e\a\y\h\h\q\x\e\7\j\s\k\k\6\o\i\l\4\a\3\c\j\i\i\7\a\a\e\j\c\i\h\j\9\0\n\x\q\m\r\4\n\0\s\p\5\y\t\c\c\7\c\8\w\c\8\m\t\2\2\u\1\3\f\q\y\b\3\0\s\6\3\p\t\v\k\o\g\c\4\i\8\9\z\j\6\o\m\u\w\6\b\a\x\l\6\y\9\7\0\1\4\d\b\6\2\5\0\3\r\a\e\a\f\2\f\1\j\b\y\g\m\h\4\i\w\k\l\b\b\j\s\n\1\p\l\i\k\l\u\w\e\4\x\x\1\y\5\y\5\d\f\i\2\0\x\2\j\w\q\e\b\b\w\e\p\d\z\w\i\e\v\i\l\i\o ]] 00:13:58.422 20:05:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:58.422 20:05:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:58.422 [2024-04-24 20:05:40.557175] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:58.422 [2024-04-24 20:05:40.557247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63578 ] 00:13:58.680 [2024-04-24 20:05:40.693004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.680 [2024-04-24 20:05:40.787272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.940  Copying: 512/512 [B] (average 250 kBps) 00:13:58.940 00:13:58.940 20:05:41 -- dd/posix.sh@93 -- # [[ dzvpc0xlxc2y79pr08on312b057zjrkgfxk9mspq4kiayigz4yo7q161e3ld1yw20ff9txqguslzyyqax11co64u81e1jqh9113ys1nrwl6wcwxhhjol44zxaohc9ki4fhwju6tdc4dsk4nr3mwihef8iqxhz7zc3627frpbrnf2nkwcm15oc84mfioiro0497alt474d5i5t40syu92uk73ypt6oprvesbg3dljvog7q27jgj8kcv8nn5pqg8lusysbzuvllxfzmug9w8j2axw79xazswr3lx0h235idwqgxvu8jnxpt5rp6t7ymd295tbjv9raorviyd5reayhhqxe7jskk6oil4a3cjii7aaejcihj90nxqmr4n0sp5ytcc7c8wc8mt22u13fqyb30s63ptvkogc4i89zj6omuw6baxl6y97014db62503raeaf2f1jbygmh4iwklbbjsn1plikluwe4xx1y5y5dfi20x2jwqebbwepdzwievilio == \d\z\v\p\c\0\x\l\x\c\2\y\7\9\p\r\0\8\o\n\3\1\2\b\0\5\7\z\j\r\k\g\f\x\k\9\m\s\p\q\4\k\i\a\y\i\g\z\4\y\o\7\q\1\6\1\e\3\l\d\1\y\w\2\0\f\f\9\t\x\q\g\u\s\l\z\y\y\q\a\x\1\1\c\o\6\4\u\8\1\e\1\j\q\h\9\1\1\3\y\s\1\n\r\w\l\6\w\c\w\x\h\h\j\o\l\4\4\z\x\a\o\h\c\9\k\i\4\f\h\w\j\u\6\t\d\c\4\d\s\k\4\n\r\3\m\w\i\h\e\f\8\i\q\x\h\z\7\z\c\3\6\2\7\f\r\p\b\r\n\f\2\n\k\w\c\m\1\5\o\c\8\4\m\f\i\o\i\r\o\0\4\9\7\a\l\t\4\7\4\d\5\i\5\t\4\0\s\y\u\9\2\u\k\7\3\y\p\t\6\o\p\r\v\e\s\b\g\3\d\l\j\v\o\g\7\q\2\7\j\g\j\8\k\c\v\8\n\n\5\p\q\g\8\l\u\s\y\s\b\z\u\v\l\l\x\f\z\m\u\g\9\w\8\j\2\a\x\w\7\9\x\a\z\s\w\r\3\l\x\0\h\2\3\5\i\d\w\q\g\x\v\u\8\j\n\x\p\t\5\r\p\6\t\7\y\m\d\2\9\5\t\b\j\v\9\r\a\o\r\v\i\y\d\5\r\e\a\y\h\h\q\x\e\7\j\s\k\k\6\o\i\l\4\a\3\c\j\i\i\7\a\a\e\j\c\i\h\j\9\0\n\x\q\m\r\4\n\0\s\p\5\y\t\c\c\7\c\8\w\c\8\m\t\2\2\u\1\3\f\q\y\b\3\0\s\6\3\p\t\v\k\o\g\c\4\i\8\9\z\j\6\o\m\u\w\6\b\a\x\l\6\y\9\7\0\1\4\d\b\6\2\5\0\3\r\a\e\a\f\2\f\1\j\b\y\g\m\h\4\i\w\k\l\b\b\j\s\n\1\p\l\i\k\l\u\w\e\4\x\x\1\y\5\y\5\d\f\i\2\0\x\2\j\w\q\e\b\b\w\e\p\d\z\w\i\e\v\i\l\i\o ]] 00:13:58.940 20:05:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:58.940 20:05:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:13:58.940 [2024-04-24 20:05:41.128186] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:13:58.940 [2024-04-24 20:05:41.128252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63585 ] 00:13:59.199 [2024-04-24 20:05:41.265050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.199 [2024-04-24 20:05:41.359966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.458  Copying: 512/512 [B] (average 250 kBps) 00:13:59.458 00:13:59.458 20:05:41 -- dd/posix.sh@93 -- # [[ dzvpc0xlxc2y79pr08on312b057zjrkgfxk9mspq4kiayigz4yo7q161e3ld1yw20ff9txqguslzyyqax11co64u81e1jqh9113ys1nrwl6wcwxhhjol44zxaohc9ki4fhwju6tdc4dsk4nr3mwihef8iqxhz7zc3627frpbrnf2nkwcm15oc84mfioiro0497alt474d5i5t40syu92uk73ypt6oprvesbg3dljvog7q27jgj8kcv8nn5pqg8lusysbzuvllxfzmug9w8j2axw79xazswr3lx0h235idwqgxvu8jnxpt5rp6t7ymd295tbjv9raorviyd5reayhhqxe7jskk6oil4a3cjii7aaejcihj90nxqmr4n0sp5ytcc7c8wc8mt22u13fqyb30s63ptvkogc4i89zj6omuw6baxl6y97014db62503raeaf2f1jbygmh4iwklbbjsn1plikluwe4xx1y5y5dfi20x2jwqebbwepdzwievilio == \d\z\v\p\c\0\x\l\x\c\2\y\7\9\p\r\0\8\o\n\3\1\2\b\0\5\7\z\j\r\k\g\f\x\k\9\m\s\p\q\4\k\i\a\y\i\g\z\4\y\o\7\q\1\6\1\e\3\l\d\1\y\w\2\0\f\f\9\t\x\q\g\u\s\l\z\y\y\q\a\x\1\1\c\o\6\4\u\8\1\e\1\j\q\h\9\1\1\3\y\s\1\n\r\w\l\6\w\c\w\x\h\h\j\o\l\4\4\z\x\a\o\h\c\9\k\i\4\f\h\w\j\u\6\t\d\c\4\d\s\k\4\n\r\3\m\w\i\h\e\f\8\i\q\x\h\z\7\z\c\3\6\2\7\f\r\p\b\r\n\f\2\n\k\w\c\m\1\5\o\c\8\4\m\f\i\o\i\r\o\0\4\9\7\a\l\t\4\7\4\d\5\i\5\t\4\0\s\y\u\9\2\u\k\7\3\y\p\t\6\o\p\r\v\e\s\b\g\3\d\l\j\v\o\g\7\q\2\7\j\g\j\8\k\c\v\8\n\n\5\p\q\g\8\l\u\s\y\s\b\z\u\v\l\l\x\f\z\m\u\g\9\w\8\j\2\a\x\w\7\9\x\a\z\s\w\r\3\l\x\0\h\2\3\5\i\d\w\q\g\x\v\u\8\j\n\x\p\t\5\r\p\6\t\7\y\m\d\2\9\5\t\b\j\v\9\r\a\o\r\v\i\y\d\5\r\e\a\y\h\h\q\x\e\7\j\s\k\k\6\o\i\l\4\a\3\c\j\i\i\7\a\a\e\j\c\i\h\j\9\0\n\x\q\m\r\4\n\0\s\p\5\y\t\c\c\7\c\8\w\c\8\m\t\2\2\u\1\3\f\q\y\b\3\0\s\6\3\p\t\v\k\o\g\c\4\i\8\9\z\j\6\o\m\u\w\6\b\a\x\l\6\y\9\7\0\1\4\d\b\6\2\5\0\3\r\a\e\a\f\2\f\1\j\b\y\g\m\h\4\i\w\k\l\b\b\j\s\n\1\p\l\i\k\l\u\w\e\4\x\x\1\y\5\y\5\d\f\i\2\0\x\2\j\w\q\e\b\b\w\e\p\d\z\w\i\e\v\i\l\i\o ]] 00:13:59.458 00:13:59.458 real 0m4.636s 00:13:59.458 user 0m2.708s 00:13:59.458 sys 0m0.952s 00:13:59.458 20:05:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.458 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:13:59.458 ************************************ 00:13:59.458 END TEST dd_flags_misc_forced_aio 00:13:59.458 ************************************ 00:13:59.458 20:05:41 -- dd/posix.sh@1 -- # cleanup 00:13:59.458 20:05:41 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:59.459 20:05:41 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:59.718 ************************************ 00:13:59.718 END TEST spdk_dd_posix 00:13:59.718 ************************************ 00:13:59.718 00:13:59.718 real 0m21.930s 00:13:59.718 user 0m11.448s 00:13:59.718 sys 0m6.086s 00:13:59.718 20:05:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:59.718 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:13:59.718 20:05:41 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:13:59.718 20:05:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:59.718 20:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:59.718 20:05:41 -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.718 ************************************ 00:13:59.718 START TEST spdk_dd_malloc 00:13:59.718 ************************************ 00:13:59.718 20:05:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:13:59.718 * Looking for test storage... 00:13:59.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:59.718 20:05:41 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.718 20:05:41 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.718 20:05:41 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.718 20:05:41 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.718 20:05:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.718 20:05:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.718 20:05:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.080 20:05:41 -- paths/export.sh@5 -- # export PATH 00:14:00.080 20:05:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.080 20:05:41 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:14:00.080 20:05:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:00.080 20:05:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.080 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:14:00.080 ************************************ 00:14:00.080 START TEST dd_malloc_copy 00:14:00.080 
************************************ 00:14:00.080 20:05:42 -- common/autotest_common.sh@1111 -- # malloc_copy 00:14:00.080 20:05:42 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:14:00.080 20:05:42 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:14:00.080 20:05:42 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:14:00.080 20:05:42 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:14:00.080 20:05:42 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:14:00.080 20:05:42 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:14:00.080 20:05:42 -- dd/malloc.sh@28 -- # gen_conf 00:14:00.080 20:05:42 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:14:00.080 20:05:42 -- dd/common.sh@31 -- # xtrace_disable 00:14:00.080 20:05:42 -- common/autotest_common.sh@10 -- # set +x 00:14:00.080 [2024-04-24 20:05:42.089315] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:00.080 [2024-04-24 20:05:42.089390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63668 ] 00:14:00.080 { 00:14:00.080 "subsystems": [ 00:14:00.080 { 00:14:00.080 "subsystem": "bdev", 00:14:00.080 "config": [ 00:14:00.080 { 00:14:00.080 "params": { 00:14:00.080 "block_size": 512, 00:14:00.080 "num_blocks": 1048576, 00:14:00.080 "name": "malloc0" 00:14:00.080 }, 00:14:00.080 "method": "bdev_malloc_create" 00:14:00.080 }, 00:14:00.080 { 00:14:00.080 "params": { 00:14:00.080 "block_size": 512, 00:14:00.080 "num_blocks": 1048576, 00:14:00.080 "name": "malloc1" 00:14:00.080 }, 00:14:00.080 "method": "bdev_malloc_create" 00:14:00.080 }, 00:14:00.080 { 00:14:00.080 "method": "bdev_wait_for_examine" 00:14:00.080 } 00:14:00.080 ] 00:14:00.080 } 00:14:00.080 ] 00:14:00.080 } 00:14:00.080 [2024-04-24 20:05:42.236372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.340 [2024-04-24 20:05:42.360640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.252  Copying: 233/512 [MB] (233 MBps) Copying: 462/512 [MB] (229 MBps) Copying: 512/512 [MB] (average 231 MBps) 00:14:03.252 00:14:03.252 20:05:45 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:14:03.252 20:05:45 -- dd/malloc.sh@33 -- # gen_conf 00:14:03.252 20:05:45 -- dd/common.sh@31 -- # xtrace_disable 00:14:03.252 20:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 [2024-04-24 20:05:45.434316] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
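Note: the malloc_copy run above copies 512 MiB between two RAM-backed bdevs; the JSON printed in the log defines them as 1048576 blocks of 512 bytes each. A sketch of the same invocation with the config written to a temporary file instead of being handed over on /dev/fd/62:
  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  conf=$(mktemp)
  printf '%s' '{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ] } ]
  }' > "$conf"
  "$DD" --ib=malloc0 --ob=malloc1 --json "$conf"   # first pass, as logged above
  "$DD" --ib=malloc1 --ob=malloc0 --json "$conf"   # and the reverse pass that follows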
00:14:03.252 [2024-04-24 20:05:45.434487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63716 ] 00:14:03.252 { 00:14:03.252 "subsystems": [ 00:14:03.252 { 00:14:03.252 "subsystem": "bdev", 00:14:03.252 "config": [ 00:14:03.252 { 00:14:03.252 "params": { 00:14:03.252 "block_size": 512, 00:14:03.252 "num_blocks": 1048576, 00:14:03.252 "name": "malloc0" 00:14:03.252 }, 00:14:03.252 "method": "bdev_malloc_create" 00:14:03.252 }, 00:14:03.252 { 00:14:03.252 "params": { 00:14:03.252 "block_size": 512, 00:14:03.252 "num_blocks": 1048576, 00:14:03.252 "name": "malloc1" 00:14:03.252 }, 00:14:03.252 "method": "bdev_malloc_create" 00:14:03.252 }, 00:14:03.252 { 00:14:03.252 "method": "bdev_wait_for_examine" 00:14:03.252 } 00:14:03.252 ] 00:14:03.252 } 00:14:03.252 ] 00:14:03.252 } 00:14:03.509 [2024-04-24 20:05:45.572993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.509 [2024-04-24 20:05:45.675105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.709  Copying: 236/512 [MB] (236 MBps) Copying: 478/512 [MB] (241 MBps) Copying: 512/512 [MB] (average 237 MBps) 00:14:06.709 00:14:06.709 ************************************ 00:14:06.709 END TEST dd_malloc_copy 00:14:06.709 ************************************ 00:14:06.709 00:14:06.709 real 0m6.604s 00:14:06.709 user 0m5.799s 00:14:06.709 sys 0m0.664s 00:14:06.709 20:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.709 20:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:06.709 ************************************ 00:14:06.709 END TEST spdk_dd_malloc 00:14:06.709 ************************************ 00:14:06.709 00:14:06.709 real 0m6.846s 00:14:06.709 user 0m5.889s 00:14:06.709 sys 0m0.812s 00:14:06.709 20:05:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.709 20:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:06.709 20:05:48 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:14:06.709 20:05:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:06.709 20:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.709 20:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:06.709 ************************************ 00:14:06.709 START TEST spdk_dd_bdev_to_bdev 00:14:06.709 ************************************ 00:14:06.709 20:05:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:14:06.709 * Looking for test storage... 
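Note: the spdk_dd_bdev_to_bdev suite starting here is handed the two NVMe PCI addresses (0000:00:10.0 and 0000:00:11.0) as positional arguments and addresses the devices as SPDK bdevs (Nvme0n1, Nvme1n1) rather than as files. A sketch of the attach-controller config it feeds to --json, plus the offset copy seen further down in this log; assuming --count and --seek are in --bs-sized blocks, which this log does not state explicitly:
  conf=$(mktemp)
  printf '%s' '{
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ] } ]
  }' > "$conf"
  # copy 65 blocks of 1 MiB from the first namespace into the second, skipping
  # the first 16 output blocks, as in the dd_offset_magic run logged later
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 \
      --bs=1048576 --count=65 --seek=16 --json "$conf"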
00:14:06.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:06.709 20:05:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.709 20:05:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.709 20:05:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.709 20:05:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.709 20:05:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.709 20:05:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.709 20:05:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.709 20:05:48 -- paths/export.sh@5 -- # export PATH 00:14:06.709 20:05:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:06.709 20:05:48 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:14:06.967 20:05:48 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:14:06.967 20:05:48 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:14:06.967 20:05:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:06.967 20:05:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.967 20:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:06.967 ************************************ 00:14:06.967 START TEST dd_inflate_file 00:14:06.967 ************************************ 00:14:06.967 20:05:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:14:06.967 [2024-04-24 20:05:49.092053] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:06.967 [2024-04-24 20:05:49.092125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63824 ] 00:14:07.225 [2024-04-24 20:05:49.232260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.225 [2024-04-24 20:05:49.337020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.483  Copying: 64/64 [MB] (average 1641 MBps) 00:14:07.483 00:14:07.483 00:14:07.483 real 0m0.616s 00:14:07.483 user 0m0.389s 00:14:07.483 sys 0m0.269s 00:14:07.483 20:05:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:07.483 20:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 ************************************ 00:14:07.483 END TEST dd_inflate_file 00:14:07.483 ************************************ 00:14:07.483 20:05:49 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:14:07.483 20:05:49 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:14:07.483 20:05:49 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:14:07.483 20:05:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:07.483 20:05:49 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:14:07.483 20:05:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:07.483 20:05:49 -- dd/common.sh@31 -- # xtrace_disable 00:14:07.483 20:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.483 20:05:49 -- common/autotest_common.sh@10 -- # set +x 00:14:07.741 { 00:14:07.741 "subsystems": [ 00:14:07.741 { 00:14:07.741 "subsystem": "bdev", 
00:14:07.741 "config": [ 00:14:07.741 { 00:14:07.741 "params": { 00:14:07.741 "trtype": "pcie", 00:14:07.741 "traddr": "0000:00:10.0", 00:14:07.741 "name": "Nvme0" 00:14:07.741 }, 00:14:07.741 "method": "bdev_nvme_attach_controller" 00:14:07.741 }, 00:14:07.741 { 00:14:07.741 "params": { 00:14:07.741 "trtype": "pcie", 00:14:07.741 "traddr": "0000:00:11.0", 00:14:07.741 "name": "Nvme1" 00:14:07.741 }, 00:14:07.741 "method": "bdev_nvme_attach_controller" 00:14:07.741 }, 00:14:07.741 { 00:14:07.741 "method": "bdev_wait_for_examine" 00:14:07.741 } 00:14:07.741 ] 00:14:07.741 } 00:14:07.741 ] 00:14:07.741 } 00:14:07.741 ************************************ 00:14:07.741 START TEST dd_copy_to_out_bdev 00:14:07.741 ************************************ 00:14:07.741 20:05:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:14:07.741 [2024-04-24 20:05:49.840211] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:07.741 [2024-04-24 20:05:49.840290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63870 ] 00:14:07.741 [2024-04-24 20:05:49.977660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.000 [2024-04-24 20:05:50.080119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.192  Copying: 64/64 [MB] (average 79 MBps) 00:14:09.192 00:14:09.192 00:14:09.192 real 0m1.632s 00:14:09.192 user 0m1.342s 00:14:09.192 sys 0m1.215s 00:14:09.192 ************************************ 00:14:09.192 END TEST dd_copy_to_out_bdev 00:14:09.192 ************************************ 00:14:09.192 20:05:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.192 20:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:14:09.458 20:05:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:09.458 20:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.458 20:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 ************************************ 00:14:09.458 START TEST dd_offset_magic 00:14:09.458 ************************************ 00:14:09.458 20:05:51 -- common/autotest_common.sh@1111 -- # offset_magic 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:14:09.458 20:05:51 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:14:09.458 20:05:51 -- dd/common.sh@31 -- # xtrace_disable 00:14:09.458 20:05:51 -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 [2024-04-24 20:05:51.583065] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
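[Editor's note] dd_copy_to_out_bdev, started above, writes a host file straight into an NVMe namespace: the JSON attaches Nvme0 (0000:00:10.0) and Nvme1 (0000:00:11.0) over PCIe, and spdk_dd takes the input file with --if and the target bdev with --ob, exactly as echoed in the log. A hedged sketch using an on-disk config instead of /dev/fd/62 (nvme.json is an illustrative name; the repo-root working directory is assumed):

    cat > nvme.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
      { "method": "bdev_wait_for_examine" }
    ] } ] }
    EOF
    # Copy the 64 MiB dump file into the first namespace exposed by controller Nvme0.
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --json nvme.json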
00:14:09.458 [2024-04-24 20:05:51.583134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63916 ] 00:14:09.458 { 00:14:09.458 "subsystems": [ 00:14:09.458 { 00:14:09.458 "subsystem": "bdev", 00:14:09.458 "config": [ 00:14:09.458 { 00:14:09.458 "params": { 00:14:09.458 "trtype": "pcie", 00:14:09.458 "traddr": "0000:00:10.0", 00:14:09.458 "name": "Nvme0" 00:14:09.458 }, 00:14:09.458 "method": "bdev_nvme_attach_controller" 00:14:09.458 }, 00:14:09.458 { 00:14:09.458 "params": { 00:14:09.458 "trtype": "pcie", 00:14:09.458 "traddr": "0000:00:11.0", 00:14:09.458 "name": "Nvme1" 00:14:09.458 }, 00:14:09.458 "method": "bdev_nvme_attach_controller" 00:14:09.458 }, 00:14:09.458 { 00:14:09.458 "method": "bdev_wait_for_examine" 00:14:09.458 } 00:14:09.458 ] 00:14:09.458 } 00:14:09.458 ] 00:14:09.458 } 00:14:09.729 [2024-04-24 20:05:51.719789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.729 [2024-04-24 20:05:51.818488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.248  Copying: 65/65 [MB] (average 670 MBps) 00:14:10.248 00:14:10.248 20:05:52 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:14:10.248 20:05:52 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:14:10.248 20:05:52 -- dd/common.sh@31 -- # xtrace_disable 00:14:10.248 20:05:52 -- common/autotest_common.sh@10 -- # set +x 00:14:10.248 [2024-04-24 20:05:52.446196] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:10.248 [2024-04-24 20:05:52.446271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63926 ] 00:14:10.248 { 00:14:10.248 "subsystems": [ 00:14:10.248 { 00:14:10.248 "subsystem": "bdev", 00:14:10.248 "config": [ 00:14:10.248 { 00:14:10.248 "params": { 00:14:10.248 "trtype": "pcie", 00:14:10.248 "traddr": "0000:00:10.0", 00:14:10.248 "name": "Nvme0" 00:14:10.248 }, 00:14:10.248 "method": "bdev_nvme_attach_controller" 00:14:10.248 }, 00:14:10.248 { 00:14:10.248 "params": { 00:14:10.248 "trtype": "pcie", 00:14:10.248 "traddr": "0000:00:11.0", 00:14:10.248 "name": "Nvme1" 00:14:10.248 }, 00:14:10.248 "method": "bdev_nvme_attach_controller" 00:14:10.248 }, 00:14:10.248 { 00:14:10.248 "method": "bdev_wait_for_examine" 00:14:10.248 } 00:14:10.248 ] 00:14:10.248 } 00:14:10.248 ] 00:14:10.248 } 00:14:10.508 [2024-04-24 20:05:52.584869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.508 [2024-04-24 20:05:52.679273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.026  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:11.026 00:14:11.026 20:05:53 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:14:11.026 20:05:53 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:14:11.026 20:05:53 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:14:11.026 20:05:53 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:14:11.026 20:05:53 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:14:11.026 20:05:53 -- dd/common.sh@31 -- # xtrace_disable 00:14:11.026 20:05:53 -- common/autotest_common.sh@10 -- # set +x 00:14:11.026 [2024-04-24 20:05:53.153434] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:11.026 [2024-04-24 20:05:53.153538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63948 ] 00:14:11.026 { 00:14:11.026 "subsystems": [ 00:14:11.026 { 00:14:11.026 "subsystem": "bdev", 00:14:11.026 "config": [ 00:14:11.026 { 00:14:11.026 "params": { 00:14:11.026 "trtype": "pcie", 00:14:11.026 "traddr": "0000:00:10.0", 00:14:11.026 "name": "Nvme0" 00:14:11.026 }, 00:14:11.026 "method": "bdev_nvme_attach_controller" 00:14:11.026 }, 00:14:11.026 { 00:14:11.026 "params": { 00:14:11.026 "trtype": "pcie", 00:14:11.026 "traddr": "0000:00:11.0", 00:14:11.026 "name": "Nvme1" 00:14:11.026 }, 00:14:11.026 "method": "bdev_nvme_attach_controller" 00:14:11.026 }, 00:14:11.026 { 00:14:11.026 "method": "bdev_wait_for_examine" 00:14:11.026 } 00:14:11.026 ] 00:14:11.026 } 00:14:11.026 ] 00:14:11.026 } 00:14:11.285 [2024-04-24 20:05:53.291750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.285 [2024-04-24 20:05:53.391111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.803  Copying: 65/65 [MB] (average 747 MBps) 00:14:11.803 00:14:11.803 20:05:54 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:14:11.803 20:05:54 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:14:11.803 20:05:54 -- dd/common.sh@31 -- # xtrace_disable 00:14:11.803 20:05:54 -- common/autotest_common.sh@10 -- # set +x 00:14:12.062 [2024-04-24 20:05:54.077237] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:12.062 [2024-04-24 20:05:54.077302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63968 ] 00:14:12.062 { 00:14:12.062 "subsystems": [ 00:14:12.062 { 00:14:12.062 "subsystem": "bdev", 00:14:12.062 "config": [ 00:14:12.062 { 00:14:12.062 "params": { 00:14:12.062 "trtype": "pcie", 00:14:12.062 "traddr": "0000:00:10.0", 00:14:12.062 "name": "Nvme0" 00:14:12.062 }, 00:14:12.062 "method": "bdev_nvme_attach_controller" 00:14:12.062 }, 00:14:12.062 { 00:14:12.062 "params": { 00:14:12.062 "trtype": "pcie", 00:14:12.062 "traddr": "0000:00:11.0", 00:14:12.062 "name": "Nvme1" 00:14:12.062 }, 00:14:12.062 "method": "bdev_nvme_attach_controller" 00:14:12.062 }, 00:14:12.062 { 00:14:12.062 "method": "bdev_wait_for_examine" 00:14:12.062 } 00:14:12.062 ] 00:14:12.062 } 00:14:12.062 ] 00:14:12.062 } 00:14:12.062 [2024-04-24 20:05:54.214649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.062 [2024-04-24 20:05:54.312213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.580  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:12.580 00:14:12.580 ************************************ 00:14:12.580 END TEST dd_offset_magic 00:14:12.580 ************************************ 00:14:12.580 20:05:54 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:14:12.580 20:05:54 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:14:12.580 00:14:12.580 real 0m3.202s 00:14:12.580 user 0m2.414s 00:14:12.580 sys 0m0.830s 00:14:12.580 20:05:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.580 20:05:54 -- common/autotest_common.sh@10 -- # set +x 00:14:12.580 20:05:54 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:14:12.580 20:05:54 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:14:12.580 20:05:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:12.580 20:05:54 -- dd/common.sh@11 -- # local nvme_ref= 00:14:12.580 20:05:54 -- dd/common.sh@12 -- # local size=4194330 00:14:12.580 20:05:54 -- dd/common.sh@14 -- # local bs=1048576 00:14:12.580 20:05:54 -- dd/common.sh@15 -- # local count=5 00:14:12.580 20:05:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:14:12.580 20:05:54 -- dd/common.sh@18 -- # gen_conf 00:14:12.580 20:05:54 -- dd/common.sh@31 -- # xtrace_disable 00:14:12.580 20:05:54 -- common/autotest_common.sh@10 -- # set +x 00:14:12.840 [2024-04-24 20:05:54.840423] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
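[Editor's note] The dd_offset_magic iterations above work in 1 MiB blocks (--bs=1048576): for each offset in (16, 64), spdk_dd copies 65 blocks from Nvme0n1 into Nvme1n1 at --seek=<offset>, then reads one block back from Nvme1n1 at --skip=<offset> into dd.dump1, and the test compares the first 26 bytes against the magic string 'This Is Our Magic, find it'. A hedged sketch of one iteration; nvme.json is the same illustrative config as above and the redirection source of read is an assumption, since only the read -rn26 call is visible in the log.

    offset=16                                   # destination/source offset, in 1 MiB blocks
    ./build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=$offset --bs=1048576 --json nvme.json
    ./build/bin/spdk_dd --ib=Nvme1n1 --of=test/dd/dd.dump1 --count=1 --skip=$offset --bs=1048576 --json nvme.json
    read -rn26 magic_check < test/dd/dd.dump1   # expect: This Is Our Magic, find it
    [[ $magic_check == 'This Is Our Magic, find it' ]]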
00:14:12.840 [2024-04-24 20:05:54.840551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64014 ] 00:14:12.840 { 00:14:12.840 "subsystems": [ 00:14:12.840 { 00:14:12.840 "subsystem": "bdev", 00:14:12.840 "config": [ 00:14:12.840 { 00:14:12.840 "params": { 00:14:12.840 "trtype": "pcie", 00:14:12.840 "traddr": "0000:00:10.0", 00:14:12.840 "name": "Nvme0" 00:14:12.840 }, 00:14:12.840 "method": "bdev_nvme_attach_controller" 00:14:12.840 }, 00:14:12.840 { 00:14:12.840 "params": { 00:14:12.840 "trtype": "pcie", 00:14:12.840 "traddr": "0000:00:11.0", 00:14:12.840 "name": "Nvme1" 00:14:12.840 }, 00:14:12.840 "method": "bdev_nvme_attach_controller" 00:14:12.840 }, 00:14:12.840 { 00:14:12.840 "method": "bdev_wait_for_examine" 00:14:12.840 } 00:14:12.840 ] 00:14:12.840 } 00:14:12.840 ] 00:14:12.840 } 00:14:12.840 [2024-04-24 20:05:54.979602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.840 [2024-04-24 20:05:55.081416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.358  Copying: 5120/5120 [kB] (average 1250 MBps) 00:14:13.358 00:14:13.358 20:05:55 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:14:13.358 20:05:55 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:14:13.358 20:05:55 -- dd/common.sh@11 -- # local nvme_ref= 00:14:13.358 20:05:55 -- dd/common.sh@12 -- # local size=4194330 00:14:13.358 20:05:55 -- dd/common.sh@14 -- # local bs=1048576 00:14:13.358 20:05:55 -- dd/common.sh@15 -- # local count=5 00:14:13.358 20:05:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:14:13.358 20:05:55 -- dd/common.sh@18 -- # gen_conf 00:14:13.358 20:05:55 -- dd/common.sh@31 -- # xtrace_disable 00:14:13.358 20:05:55 -- common/autotest_common.sh@10 -- # set +x 00:14:13.358 [2024-04-24 20:05:55.546372] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
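[Editor's note] The two clear_nvme calls above are the bdev_to_bdev cleanup: for each namespace they stream /dev/zero through spdk_dd in five 1 MiB blocks, zero-filling the 4,194,330 bytes requested (rounded up to 5 MiB). A minimal sketch, again substituting an on-disk nvme.json for the /dev/fd/62 pipe used by the test:

    for bdev in Nvme0n1 Nvme1n1; do
        # Overwrite the first 5 MiB of the namespace with zeros.
        ./build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=$bdev --count=5 --json nvme.json
    done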
00:14:13.358 [2024-04-24 20:05:55.546508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64025 ] 00:14:13.358 { 00:14:13.358 "subsystems": [ 00:14:13.358 { 00:14:13.358 "subsystem": "bdev", 00:14:13.358 "config": [ 00:14:13.358 { 00:14:13.358 "params": { 00:14:13.358 "trtype": "pcie", 00:14:13.358 "traddr": "0000:00:10.0", 00:14:13.358 "name": "Nvme0" 00:14:13.358 }, 00:14:13.358 "method": "bdev_nvme_attach_controller" 00:14:13.358 }, 00:14:13.358 { 00:14:13.358 "params": { 00:14:13.358 "trtype": "pcie", 00:14:13.358 "traddr": "0000:00:11.0", 00:14:13.358 "name": "Nvme1" 00:14:13.358 }, 00:14:13.358 "method": "bdev_nvme_attach_controller" 00:14:13.358 }, 00:14:13.358 { 00:14:13.358 "method": "bdev_wait_for_examine" 00:14:13.358 } 00:14:13.358 ] 00:14:13.358 } 00:14:13.358 ] 00:14:13.358 } 00:14:13.618 [2024-04-24 20:05:55.684704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.618 [2024-04-24 20:05:55.780552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.167  Copying: 5120/5120 [kB] (average 625 MBps) 00:14:14.167 00:14:14.167 20:05:56 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:14:14.167 ************************************ 00:14:14.167 END TEST spdk_dd_bdev_to_bdev 00:14:14.167 ************************************ 00:14:14.167 00:14:14.167 real 0m7.409s 00:14:14.167 user 0m5.437s 00:14:14.167 sys 0m3.119s 00:14:14.167 20:05:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.167 20:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:14.167 20:05:56 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:14:14.167 20:05:56 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:14:14.167 20:05:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.167 20:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.167 20:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:14.167 ************************************ 00:14:14.167 START TEST spdk_dd_uring 00:14:14.167 ************************************ 00:14:14.167 20:05:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:14:14.428 * Looking for test storage... 
00:14:14.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:14.428 20:05:56 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.428 20:05:56 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.428 20:05:56 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.428 20:05:56 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.428 20:05:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.428 20:05:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.428 20:05:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.428 20:05:56 -- paths/export.sh@5 -- # export PATH 00:14:14.428 20:05:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.428 20:05:56 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:14:14.428 20:05:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:14.428 20:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:14.429 20:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:14.429 ************************************ 00:14:14.429 START TEST dd_uring_copy 00:14:14.429 ************************************ 00:14:14.429 20:05:56 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:14:14.429 20:05:56 -- dd/uring.sh@15 -- # local zram_dev_id 00:14:14.429 20:05:56 -- dd/uring.sh@16 -- # local magic 00:14:14.429 20:05:56 -- dd/uring.sh@17 -- # local 
magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:14:14.429 20:05:56 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:14.429 20:05:56 -- dd/uring.sh@19 -- # local verify_magic 00:14:14.429 20:05:56 -- dd/uring.sh@21 -- # init_zram 00:14:14.429 20:05:56 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:14:14.429 20:05:56 -- dd/common.sh@164 -- # return 00:14:14.429 20:05:56 -- dd/uring.sh@22 -- # create_zram_dev 00:14:14.429 20:05:56 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:14:14.429 20:05:56 -- dd/uring.sh@22 -- # zram_dev_id=1 00:14:14.429 20:05:56 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:14:14.429 20:05:56 -- dd/common.sh@181 -- # local id=1 00:14:14.429 20:05:56 -- dd/common.sh@182 -- # local size=512M 00:14:14.429 20:05:56 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:14:14.429 20:05:56 -- dd/common.sh@186 -- # echo 512M 00:14:14.429 20:05:56 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:14:14.429 20:05:56 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:14:14.429 20:05:56 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:14:14.429 20:05:56 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:14:14.429 20:05:56 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:14:14.429 20:05:56 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:14:14.429 20:05:56 -- dd/uring.sh@41 -- # gen_bytes 1024 00:14:14.429 20:05:56 -- dd/common.sh@98 -- # xtrace_disable 00:14:14.429 20:05:56 -- common/autotest_common.sh@10 -- # set +x 00:14:14.429 20:05:56 -- dd/uring.sh@41 -- # magic=66lvzz2zrd0vpum5qkypi9e6tzh71d1vkskc50d7742yalrm08m6vecmrf4y3ug0adm9qdkqpxr819egnc2dpc4g3g9fvtm2za7eaz8vt28w4d01588fnee8hdwupv8fb0dioyq1ok8ctys4zzlhc82bwcb5316gvls66limbbkaw4lfki6b4pc6pl2jn2r9b92u9ch16e5e36yvlhbz8fy3eiginhmori2pwspgnkom87yr0pqyi6c79rsox9e16kg0bkurfnbwj0qv68wntoe5h5jdplqsoba34txm265fvan5dw0281t1wcm6qdpz14u75yaplwftqs8touuz1k4vd43dyxi7fl8z35be157cyj69wisxg2rj280rl5mp47zuskq995h79ndzzzdoh0cxnq1r4o6rieyo4waovi916bnzxfbqr1a2zlfggasxva652e1matbaru1we55wsvw25oovwaubjfr8gyilscf024eptpnt43h8hl5x7qyj78l8ipd48k15riscwprydojxjji9p7emsbyhexnrsyxnhfepna2gr0jon194rqsknnrg3fapc22xz8yeh6abpvr6i4sgx2nijs11ighbo42fggj0bc5zise9m8w9mqegp1bjghwidf9kzoslaue84v4g0exfhdtnh01qavuycf5jykjqbnyywo54kozmn45jxqbepm0n0mbw73yjbg5l5it3ad8yy932nla7g3y3m27s4j0iah5bwslab6x81wb6njg8yer4jo10wlhezoxvs6rd7q8vhdhgsdmlymszm1e7sjx2ndjc539p12u97stzp9ekj0dbc8fj0vcni64omp39zf6mxqepkgj9clsmt1rh925wwjl15hdfstedv5u1tovppduur6twuekwd8jqkdypdc9h6s1j6jftq7dds2cw66mux7d4yzap69t38ka042d4z53skkbf9pgx5sqfhgbgungdn2vcal1xdx1uix5u5jypzvguzqz7t9ztrhe1 00:14:14.429 20:05:56 -- dd/uring.sh@42 -- # echo 
66lvzz2zrd0vpum5qkypi9e6tzh71d1vkskc50d7742yalrm08m6vecmrf4y3ug0adm9qdkqpxr819egnc2dpc4g3g9fvtm2za7eaz8vt28w4d01588fnee8hdwupv8fb0dioyq1ok8ctys4zzlhc82bwcb5316gvls66limbbkaw4lfki6b4pc6pl2jn2r9b92u9ch16e5e36yvlhbz8fy3eiginhmori2pwspgnkom87yr0pqyi6c79rsox9e16kg0bkurfnbwj0qv68wntoe5h5jdplqsoba34txm265fvan5dw0281t1wcm6qdpz14u75yaplwftqs8touuz1k4vd43dyxi7fl8z35be157cyj69wisxg2rj280rl5mp47zuskq995h79ndzzzdoh0cxnq1r4o6rieyo4waovi916bnzxfbqr1a2zlfggasxva652e1matbaru1we55wsvw25oovwaubjfr8gyilscf024eptpnt43h8hl5x7qyj78l8ipd48k15riscwprydojxjji9p7emsbyhexnrsyxnhfepna2gr0jon194rqsknnrg3fapc22xz8yeh6abpvr6i4sgx2nijs11ighbo42fggj0bc5zise9m8w9mqegp1bjghwidf9kzoslaue84v4g0exfhdtnh01qavuycf5jykjqbnyywo54kozmn45jxqbepm0n0mbw73yjbg5l5it3ad8yy932nla7g3y3m27s4j0iah5bwslab6x81wb6njg8yer4jo10wlhezoxvs6rd7q8vhdhgsdmlymszm1e7sjx2ndjc539p12u97stzp9ekj0dbc8fj0vcni64omp39zf6mxqepkgj9clsmt1rh925wwjl15hdfstedv5u1tovppduur6twuekwd8jqkdypdc9h6s1j6jftq7dds2cw66mux7d4yzap69t38ka042d4z53skkbf9pgx5sqfhgbgungdn2vcal1xdx1uix5u5jypzvguzqz7t9ztrhe1 00:14:14.429 20:05:56 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:14:14.687 [2024-04-24 20:05:56.703166] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:14.687 [2024-04-24 20:05:56.703247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64111 ] 00:14:14.687 [2024-04-24 20:05:56.847177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.946 [2024-04-24 20:05:56.949645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.770  Copying: 511/511 [MB] (average 1595 MBps) 00:14:15.770 00:14:15.770 20:05:57 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:14:15.770 20:05:57 -- dd/uring.sh@54 -- # gen_conf 00:14:15.770 20:05:57 -- dd/common.sh@31 -- # xtrace_disable 00:14:15.770 20:05:57 -- common/autotest_common.sh@10 -- # set +x 00:14:15.770 [2024-04-24 20:05:57.921299] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
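[Editor's note] dd_uring_copy stages its data on a zram device: a new device id is taken from /sys/class/zram-control/hot_add and sized to 512M, uring0 is declared with bdev_uring_create over /dev/zram1, and malloc0 (512 B x 1048576 blocks) is its in-memory peer. The magic.dump0 file built above (the 1 KiB tag echoed first, then a --oflag=append --bs=536869887 --count=1 fill from /dev/zero) is then copied into uring0. A hedged sketch of the setup and copy; uring.json is an illustrative file name and the exact sysfs path used for sizing is an assumption, since the log only shows 'echo 512M'.

    id=$(cat /sys/class/zram-control/hot_add)   # e.g. 1 -> /dev/zram1
    echo 512M > /sys/block/zram$id/disksize     # assumed sizing path (needs root)
    cat > uring.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create",
        "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
      { "method": "bdev_uring_create",
        "params": { "name": "uring0", "filename": "/dev/zram1" } },
      { "method": "bdev_wait_for_examine" }
    ] } ] }
    EOF
    # Push the ~512 MiB magic dump into the io_uring-backed bdev.
    ./build/bin/spdk_dd --if=test/dd/magic.dump0 --ob=uring0 --json uring.json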
00:14:15.770 [2024-04-24 20:05:57.921689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64127 ] 00:14:15.770 { 00:14:15.770 "subsystems": [ 00:14:15.770 { 00:14:15.770 "subsystem": "bdev", 00:14:15.770 "config": [ 00:14:15.770 { 00:14:15.770 "params": { 00:14:15.770 "block_size": 512, 00:14:15.770 "num_blocks": 1048576, 00:14:15.770 "name": "malloc0" 00:14:15.770 }, 00:14:15.770 "method": "bdev_malloc_create" 00:14:15.770 }, 00:14:15.770 { 00:14:15.770 "params": { 00:14:15.770 "filename": "/dev/zram1", 00:14:15.770 "name": "uring0" 00:14:15.770 }, 00:14:15.770 "method": "bdev_uring_create" 00:14:15.770 }, 00:14:15.770 { 00:14:15.770 "method": "bdev_wait_for_examine" 00:14:15.770 } 00:14:15.770 ] 00:14:15.770 } 00:14:15.770 ] 00:14:15.770 } 00:14:16.029 [2024-04-24 20:05:58.051432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.029 [2024-04-24 20:05:58.154761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.602  Copying: 266/512 [MB] (266 MBps) Copying: 512/512 [MB] (average 271 MBps) 00:14:18.602 00:14:18.602 20:06:00 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:14:18.602 20:06:00 -- dd/uring.sh@60 -- # gen_conf 00:14:18.602 20:06:00 -- dd/common.sh@31 -- # xtrace_disable 00:14:18.602 20:06:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.602 [2024-04-24 20:06:00.677384] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:18.602 [2024-04-24 20:06:00.677478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64165 ] 00:14:18.602 { 00:14:18.602 "subsystems": [ 00:14:18.602 { 00:14:18.602 "subsystem": "bdev", 00:14:18.602 "config": [ 00:14:18.603 { 00:14:18.603 "params": { 00:14:18.603 "block_size": 512, 00:14:18.603 "num_blocks": 1048576, 00:14:18.603 "name": "malloc0" 00:14:18.603 }, 00:14:18.603 "method": "bdev_malloc_create" 00:14:18.603 }, 00:14:18.603 { 00:14:18.603 "params": { 00:14:18.603 "filename": "/dev/zram1", 00:14:18.603 "name": "uring0" 00:14:18.603 }, 00:14:18.603 "method": "bdev_uring_create" 00:14:18.603 }, 00:14:18.603 { 00:14:18.603 "method": "bdev_wait_for_examine" 00:14:18.603 } 00:14:18.603 ] 00:14:18.603 } 00:14:18.603 ] 00:14:18.603 } 00:14:18.603 [2024-04-24 20:06:00.815002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.862 [2024-04-24 20:06:00.921576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.103  Copying: 193/512 [MB] (193 MBps) Copying: 389/512 [MB] (196 MBps) Copying: 512/512 [MB] (average 192 MBps) 00:14:22.103 00:14:22.104 20:06:04 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:14:22.104 20:06:04 -- dd/uring.sh@66 -- # [[ 
66lvzz2zrd0vpum5qkypi9e6tzh71d1vkskc50d7742yalrm08m6vecmrf4y3ug0adm9qdkqpxr819egnc2dpc4g3g9fvtm2za7eaz8vt28w4d01588fnee8hdwupv8fb0dioyq1ok8ctys4zzlhc82bwcb5316gvls66limbbkaw4lfki6b4pc6pl2jn2r9b92u9ch16e5e36yvlhbz8fy3eiginhmori2pwspgnkom87yr0pqyi6c79rsox9e16kg0bkurfnbwj0qv68wntoe5h5jdplqsoba34txm265fvan5dw0281t1wcm6qdpz14u75yaplwftqs8touuz1k4vd43dyxi7fl8z35be157cyj69wisxg2rj280rl5mp47zuskq995h79ndzzzdoh0cxnq1r4o6rieyo4waovi916bnzxfbqr1a2zlfggasxva652e1matbaru1we55wsvw25oovwaubjfr8gyilscf024eptpnt43h8hl5x7qyj78l8ipd48k15riscwprydojxjji9p7emsbyhexnrsyxnhfepna2gr0jon194rqsknnrg3fapc22xz8yeh6abpvr6i4sgx2nijs11ighbo42fggj0bc5zise9m8w9mqegp1bjghwidf9kzoslaue84v4g0exfhdtnh01qavuycf5jykjqbnyywo54kozmn45jxqbepm0n0mbw73yjbg5l5it3ad8yy932nla7g3y3m27s4j0iah5bwslab6x81wb6njg8yer4jo10wlhezoxvs6rd7q8vhdhgsdmlymszm1e7sjx2ndjc539p12u97stzp9ekj0dbc8fj0vcni64omp39zf6mxqepkgj9clsmt1rh925wwjl15hdfstedv5u1tovppduur6twuekwd8jqkdypdc9h6s1j6jftq7dds2cw66mux7d4yzap69t38ka042d4z53skkbf9pgx5sqfhgbgungdn2vcal1xdx1uix5u5jypzvguzqz7t9ztrhe1 == \6\6\l\v\z\z\2\z\r\d\0\v\p\u\m\5\q\k\y\p\i\9\e\6\t\z\h\7\1\d\1\v\k\s\k\c\5\0\d\7\7\4\2\y\a\l\r\m\0\8\m\6\v\e\c\m\r\f\4\y\3\u\g\0\a\d\m\9\q\d\k\q\p\x\r\8\1\9\e\g\n\c\2\d\p\c\4\g\3\g\9\f\v\t\m\2\z\a\7\e\a\z\8\v\t\2\8\w\4\d\0\1\5\8\8\f\n\e\e\8\h\d\w\u\p\v\8\f\b\0\d\i\o\y\q\1\o\k\8\c\t\y\s\4\z\z\l\h\c\8\2\b\w\c\b\5\3\1\6\g\v\l\s\6\6\l\i\m\b\b\k\a\w\4\l\f\k\i\6\b\4\p\c\6\p\l\2\j\n\2\r\9\b\9\2\u\9\c\h\1\6\e\5\e\3\6\y\v\l\h\b\z\8\f\y\3\e\i\g\i\n\h\m\o\r\i\2\p\w\s\p\g\n\k\o\m\8\7\y\r\0\p\q\y\i\6\c\7\9\r\s\o\x\9\e\1\6\k\g\0\b\k\u\r\f\n\b\w\j\0\q\v\6\8\w\n\t\o\e\5\h\5\j\d\p\l\q\s\o\b\a\3\4\t\x\m\2\6\5\f\v\a\n\5\d\w\0\2\8\1\t\1\w\c\m\6\q\d\p\z\1\4\u\7\5\y\a\p\l\w\f\t\q\s\8\t\o\u\u\z\1\k\4\v\d\4\3\d\y\x\i\7\f\l\8\z\3\5\b\e\1\5\7\c\y\j\6\9\w\i\s\x\g\2\r\j\2\8\0\r\l\5\m\p\4\7\z\u\s\k\q\9\9\5\h\7\9\n\d\z\z\z\d\o\h\0\c\x\n\q\1\r\4\o\6\r\i\e\y\o\4\w\a\o\v\i\9\1\6\b\n\z\x\f\b\q\r\1\a\2\z\l\f\g\g\a\s\x\v\a\6\5\2\e\1\m\a\t\b\a\r\u\1\w\e\5\5\w\s\v\w\2\5\o\o\v\w\a\u\b\j\f\r\8\g\y\i\l\s\c\f\0\2\4\e\p\t\p\n\t\4\3\h\8\h\l\5\x\7\q\y\j\7\8\l\8\i\p\d\4\8\k\1\5\r\i\s\c\w\p\r\y\d\o\j\x\j\j\i\9\p\7\e\m\s\b\y\h\e\x\n\r\s\y\x\n\h\f\e\p\n\a\2\g\r\0\j\o\n\1\9\4\r\q\s\k\n\n\r\g\3\f\a\p\c\2\2\x\z\8\y\e\h\6\a\b\p\v\r\6\i\4\s\g\x\2\n\i\j\s\1\1\i\g\h\b\o\4\2\f\g\g\j\0\b\c\5\z\i\s\e\9\m\8\w\9\m\q\e\g\p\1\b\j\g\h\w\i\d\f\9\k\z\o\s\l\a\u\e\8\4\v\4\g\0\e\x\f\h\d\t\n\h\0\1\q\a\v\u\y\c\f\5\j\y\k\j\q\b\n\y\y\w\o\5\4\k\o\z\m\n\4\5\j\x\q\b\e\p\m\0\n\0\m\b\w\7\3\y\j\b\g\5\l\5\i\t\3\a\d\8\y\y\9\3\2\n\l\a\7\g\3\y\3\m\2\7\s\4\j\0\i\a\h\5\b\w\s\l\a\b\6\x\8\1\w\b\6\n\j\g\8\y\e\r\4\j\o\1\0\w\l\h\e\z\o\x\v\s\6\r\d\7\q\8\v\h\d\h\g\s\d\m\l\y\m\s\z\m\1\e\7\s\j\x\2\n\d\j\c\5\3\9\p\1\2\u\9\7\s\t\z\p\9\e\k\j\0\d\b\c\8\f\j\0\v\c\n\i\6\4\o\m\p\3\9\z\f\6\m\x\q\e\p\k\g\j\9\c\l\s\m\t\1\r\h\9\2\5\w\w\j\l\1\5\h\d\f\s\t\e\d\v\5\u\1\t\o\v\p\p\d\u\u\r\6\t\w\u\e\k\w\d\8\j\q\k\d\y\p\d\c\9\h\6\s\1\j\6\j\f\t\q\7\d\d\s\2\c\w\6\6\m\u\x\7\d\4\y\z\a\p\6\9\t\3\8\k\a\0\4\2\d\4\z\5\3\s\k\k\b\f\9\p\g\x\5\s\q\f\h\g\b\g\u\n\g\d\n\2\v\c\a\l\1\x\d\x\1\u\i\x\5\u\5\j\y\p\z\v\g\u\z\q\z\7\t\9\z\t\r\h\e\1 ]] 00:14:22.104 20:06:04 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:14:22.104 20:06:04 -- dd/uring.sh@69 -- # [[ 
66lvzz2zrd0vpum5qkypi9e6tzh71d1vkskc50d7742yalrm08m6vecmrf4y3ug0adm9qdkqpxr819egnc2dpc4g3g9fvtm2za7eaz8vt28w4d01588fnee8hdwupv8fb0dioyq1ok8ctys4zzlhc82bwcb5316gvls66limbbkaw4lfki6b4pc6pl2jn2r9b92u9ch16e5e36yvlhbz8fy3eiginhmori2pwspgnkom87yr0pqyi6c79rsox9e16kg0bkurfnbwj0qv68wntoe5h5jdplqsoba34txm265fvan5dw0281t1wcm6qdpz14u75yaplwftqs8touuz1k4vd43dyxi7fl8z35be157cyj69wisxg2rj280rl5mp47zuskq995h79ndzzzdoh0cxnq1r4o6rieyo4waovi916bnzxfbqr1a2zlfggasxva652e1matbaru1we55wsvw25oovwaubjfr8gyilscf024eptpnt43h8hl5x7qyj78l8ipd48k15riscwprydojxjji9p7emsbyhexnrsyxnhfepna2gr0jon194rqsknnrg3fapc22xz8yeh6abpvr6i4sgx2nijs11ighbo42fggj0bc5zise9m8w9mqegp1bjghwidf9kzoslaue84v4g0exfhdtnh01qavuycf5jykjqbnyywo54kozmn45jxqbepm0n0mbw73yjbg5l5it3ad8yy932nla7g3y3m27s4j0iah5bwslab6x81wb6njg8yer4jo10wlhezoxvs6rd7q8vhdhgsdmlymszm1e7sjx2ndjc539p12u97stzp9ekj0dbc8fj0vcni64omp39zf6mxqepkgj9clsmt1rh925wwjl15hdfstedv5u1tovppduur6twuekwd8jqkdypdc9h6s1j6jftq7dds2cw66mux7d4yzap69t38ka042d4z53skkbf9pgx5sqfhgbgungdn2vcal1xdx1uix5u5jypzvguzqz7t9ztrhe1 == \6\6\l\v\z\z\2\z\r\d\0\v\p\u\m\5\q\k\y\p\i\9\e\6\t\z\h\7\1\d\1\v\k\s\k\c\5\0\d\7\7\4\2\y\a\l\r\m\0\8\m\6\v\e\c\m\r\f\4\y\3\u\g\0\a\d\m\9\q\d\k\q\p\x\r\8\1\9\e\g\n\c\2\d\p\c\4\g\3\g\9\f\v\t\m\2\z\a\7\e\a\z\8\v\t\2\8\w\4\d\0\1\5\8\8\f\n\e\e\8\h\d\w\u\p\v\8\f\b\0\d\i\o\y\q\1\o\k\8\c\t\y\s\4\z\z\l\h\c\8\2\b\w\c\b\5\3\1\6\g\v\l\s\6\6\l\i\m\b\b\k\a\w\4\l\f\k\i\6\b\4\p\c\6\p\l\2\j\n\2\r\9\b\9\2\u\9\c\h\1\6\e\5\e\3\6\y\v\l\h\b\z\8\f\y\3\e\i\g\i\n\h\m\o\r\i\2\p\w\s\p\g\n\k\o\m\8\7\y\r\0\p\q\y\i\6\c\7\9\r\s\o\x\9\e\1\6\k\g\0\b\k\u\r\f\n\b\w\j\0\q\v\6\8\w\n\t\o\e\5\h\5\j\d\p\l\q\s\o\b\a\3\4\t\x\m\2\6\5\f\v\a\n\5\d\w\0\2\8\1\t\1\w\c\m\6\q\d\p\z\1\4\u\7\5\y\a\p\l\w\f\t\q\s\8\t\o\u\u\z\1\k\4\v\d\4\3\d\y\x\i\7\f\l\8\z\3\5\b\e\1\5\7\c\y\j\6\9\w\i\s\x\g\2\r\j\2\8\0\r\l\5\m\p\4\7\z\u\s\k\q\9\9\5\h\7\9\n\d\z\z\z\d\o\h\0\c\x\n\q\1\r\4\o\6\r\i\e\y\o\4\w\a\o\v\i\9\1\6\b\n\z\x\f\b\q\r\1\a\2\z\l\f\g\g\a\s\x\v\a\6\5\2\e\1\m\a\t\b\a\r\u\1\w\e\5\5\w\s\v\w\2\5\o\o\v\w\a\u\b\j\f\r\8\g\y\i\l\s\c\f\0\2\4\e\p\t\p\n\t\4\3\h\8\h\l\5\x\7\q\y\j\7\8\l\8\i\p\d\4\8\k\1\5\r\i\s\c\w\p\r\y\d\o\j\x\j\j\i\9\p\7\e\m\s\b\y\h\e\x\n\r\s\y\x\n\h\f\e\p\n\a\2\g\r\0\j\o\n\1\9\4\r\q\s\k\n\n\r\g\3\f\a\p\c\2\2\x\z\8\y\e\h\6\a\b\p\v\r\6\i\4\s\g\x\2\n\i\j\s\1\1\i\g\h\b\o\4\2\f\g\g\j\0\b\c\5\z\i\s\e\9\m\8\w\9\m\q\e\g\p\1\b\j\g\h\w\i\d\f\9\k\z\o\s\l\a\u\e\8\4\v\4\g\0\e\x\f\h\d\t\n\h\0\1\q\a\v\u\y\c\f\5\j\y\k\j\q\b\n\y\y\w\o\5\4\k\o\z\m\n\4\5\j\x\q\b\e\p\m\0\n\0\m\b\w\7\3\y\j\b\g\5\l\5\i\t\3\a\d\8\y\y\9\3\2\n\l\a\7\g\3\y\3\m\2\7\s\4\j\0\i\a\h\5\b\w\s\l\a\b\6\x\8\1\w\b\6\n\j\g\8\y\e\r\4\j\o\1\0\w\l\h\e\z\o\x\v\s\6\r\d\7\q\8\v\h\d\h\g\s\d\m\l\y\m\s\z\m\1\e\7\s\j\x\2\n\d\j\c\5\3\9\p\1\2\u\9\7\s\t\z\p\9\e\k\j\0\d\b\c\8\f\j\0\v\c\n\i\6\4\o\m\p\3\9\z\f\6\m\x\q\e\p\k\g\j\9\c\l\s\m\t\1\r\h\9\2\5\w\w\j\l\1\5\h\d\f\s\t\e\d\v\5\u\1\t\o\v\p\p\d\u\u\r\6\t\w\u\e\k\w\d\8\j\q\k\d\y\p\d\c\9\h\6\s\1\j\6\j\f\t\q\7\d\d\s\2\c\w\6\6\m\u\x\7\d\4\y\z\a\p\6\9\t\3\8\k\a\0\4\2\d\4\z\5\3\s\k\k\b\f\9\p\g\x\5\s\q\f\h\g\b\g\u\n\g\d\n\2\v\c\a\l\1\x\d\x\1\u\i\x\5\u\5\j\y\p\z\v\g\u\z\q\z\7\t\9\z\t\r\h\e\1 ]] 00:14:22.104 20:06:04 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:22.362 20:06:04 -- dd/uring.sh@75 -- # gen_conf 00:14:22.362 20:06:04 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:14:22.362 20:06:04 -- dd/common.sh@31 -- # xtrace_disable 00:14:22.362 20:06:04 -- common/autotest_common.sh@10 -- # set +x 
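[Editor's note] Verification above happens after uring0 has been dumped back out to magic.dump1 (dd/uring.sh@60): the first 1 KiB is read back twice with read -rn1024 verify_magic (presumably once per dump file) and matched against the generated tag, then diff -q confirms magic.dump0 and magic.dump1 are byte-identical; the dd/uring.sh@75 step that follows additionally copies uring0 into malloc0. A compact sketch of the checks, with the read redirection sources assumed:

    # $magic holds the tag produced by gen_bytes 1024 (dd/uring.sh@41).
    read -rn1024 verify_magic < test/dd/magic.dump0; [[ $verify_magic == "$magic" ]]
    read -rn1024 verify_magic < test/dd/magic.dump1; [[ $verify_magic == "$magic" ]]
    diff -q test/dd/magic.dump0 test/dd/magic.dump1   # silent and exit 0 when identical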
00:14:22.362 [2024-04-24 20:06:04.458708] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:22.362 [2024-04-24 20:06:04.458781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64236 ] 00:14:22.362 { 00:14:22.362 "subsystems": [ 00:14:22.362 { 00:14:22.362 "subsystem": "bdev", 00:14:22.362 "config": [ 00:14:22.362 { 00:14:22.362 "params": { 00:14:22.362 "block_size": 512, 00:14:22.362 "num_blocks": 1048576, 00:14:22.362 "name": "malloc0" 00:14:22.362 }, 00:14:22.362 "method": "bdev_malloc_create" 00:14:22.362 }, 00:14:22.362 { 00:14:22.362 "params": { 00:14:22.362 "filename": "/dev/zram1", 00:14:22.362 "name": "uring0" 00:14:22.362 }, 00:14:22.362 "method": "bdev_uring_create" 00:14:22.362 }, 00:14:22.362 { 00:14:22.362 "method": "bdev_wait_for_examine" 00:14:22.363 } 00:14:22.363 ] 00:14:22.363 } 00:14:22.363 ] 00:14:22.363 } 00:14:22.363 [2024-04-24 20:06:04.595361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.621 [2024-04-24 20:06:04.694724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.761  Copying: 197/512 [MB] (197 MBps) Copying: 385/512 [MB] (187 MBps) Copying: 512/512 [MB] (average 195 MBps) 00:14:25.761 00:14:25.761 20:06:07 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:14:25.761 20:06:07 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:14:25.761 20:06:07 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:14:25.761 20:06:07 -- dd/uring.sh@87 -- # : 00:14:25.761 20:06:07 -- dd/uring.sh@87 -- # gen_conf 00:14:25.761 20:06:07 -- dd/uring.sh@87 -- # : 00:14:25.761 20:06:07 -- dd/common.sh@31 -- # xtrace_disable 00:14:25.761 20:06:07 -- common/autotest_common.sh@10 -- # set +x 00:14:25.761 [2024-04-24 20:06:07.929861] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:14:25.761 [2024-04-24 20:06:07.929915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64292 ] 00:14:25.761 { 00:14:25.761 "subsystems": [ 00:14:25.761 { 00:14:25.761 "subsystem": "bdev", 00:14:25.761 "config": [ 00:14:25.761 { 00:14:25.761 "params": { 00:14:25.761 "block_size": 512, 00:14:25.761 "num_blocks": 1048576, 00:14:25.761 "name": "malloc0" 00:14:25.761 }, 00:14:25.761 "method": "bdev_malloc_create" 00:14:25.761 }, 00:14:25.761 { 00:14:25.761 "params": { 00:14:25.761 "filename": "/dev/zram1", 00:14:25.761 "name": "uring0" 00:14:25.761 }, 00:14:25.761 "method": "bdev_uring_create" 00:14:25.761 }, 00:14:25.761 { 00:14:25.761 "params": { 00:14:25.761 "name": "uring0" 00:14:25.761 }, 00:14:25.761 "method": "bdev_uring_delete" 00:14:25.761 }, 00:14:25.761 { 00:14:25.761 "method": "bdev_wait_for_examine" 00:14:25.761 } 00:14:25.761 ] 00:14:25.761 } 00:14:25.761 ] 00:14:25.761 } 00:14:26.030 [2024-04-24 20:06:08.068274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.030 [2024-04-24 20:06:08.165537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.550  Copying: 0/0 [B] (average 0 Bps) 00:14:26.550 00:14:26.550 20:06:08 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:26.550 20:06:08 -- common/autotest_common.sh@638 -- # local es=0 00:14:26.550 20:06:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:26.550 20:06:08 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:26.550 20:06:08 -- dd/uring.sh@94 -- # : 00:14:26.550 20:06:08 -- dd/uring.sh@94 -- # gen_conf 00:14:26.550 20:06:08 -- dd/common.sh@31 -- # xtrace_disable 00:14:26.550 20:06:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.550 20:06:08 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:26.550 20:06:08 -- common/autotest_common.sh@10 -- # set +x 00:14:26.550 20:06:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.550 20:06:08 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:26.550 20:06:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:26.550 20:06:08 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:26.550 20:06:08 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:26.550 20:06:08 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:26.809 [2024-04-24 20:06:08.803709] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
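[Editor's note] The last two spdk_dd runs in dd_uring_copy are a teardown check: the config above ends with a bdev_uring_delete entry for uring0, so after that run the NOT wrapper (dd/uring.sh@94) expects a second spdk_dd --ib=uring0 to fail, which appears further down as 'Could not open bdev uring0: No such device' with the harness reducing the non-zero status through es=237 to a pass. A hedged, framework-free sketch of the same negative check; uring.json would carry the create-plus-delete config shown in the log, and --of=/dev/null is an illustrative sink (the test uses /dev/fd/62):

    if ./build/bin/spdk_dd --ib=uring0 --of=/dev/null --json uring.json; then
        echo "unexpected success: uring0 should already have been deleted" >&2
        exit 1
    fi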
00:14:26.809 [2024-04-24 20:06:08.803781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64313 ] 00:14:26.809 { 00:14:26.809 "subsystems": [ 00:14:26.809 { 00:14:26.809 "subsystem": "bdev", 00:14:26.809 "config": [ 00:14:26.809 { 00:14:26.809 "params": { 00:14:26.809 "block_size": 512, 00:14:26.809 "num_blocks": 1048576, 00:14:26.809 "name": "malloc0" 00:14:26.809 }, 00:14:26.809 "method": "bdev_malloc_create" 00:14:26.809 }, 00:14:26.809 { 00:14:26.809 "params": { 00:14:26.809 "filename": "/dev/zram1", 00:14:26.809 "name": "uring0" 00:14:26.809 }, 00:14:26.809 "method": "bdev_uring_create" 00:14:26.809 }, 00:14:26.809 { 00:14:26.809 "params": { 00:14:26.809 "name": "uring0" 00:14:26.809 }, 00:14:26.809 "method": "bdev_uring_delete" 00:14:26.809 }, 00:14:26.809 { 00:14:26.809 "method": "bdev_wait_for_examine" 00:14:26.809 } 00:14:26.809 ] 00:14:26.809 } 00:14:26.809 ] 00:14:26.809 } 00:14:26.809 [2024-04-24 20:06:09.052957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.069 [2024-04-24 20:06:09.161335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.329 [2024-04-24 20:06:09.365597] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:14:27.329 [2024-04-24 20:06:09.365721] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:14:27.329 [2024-04-24 20:06:09.365743] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:14:27.329 [2024-04-24 20:06:09.365768] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:27.588 [2024-04-24 20:06:09.618379] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:27.588 20:06:09 -- common/autotest_common.sh@641 -- # es=237 00:14:27.588 20:06:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:27.588 20:06:09 -- common/autotest_common.sh@650 -- # es=109 00:14:27.588 20:06:09 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:27.588 20:06:09 -- common/autotest_common.sh@658 -- # es=1 00:14:27.588 20:06:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:27.588 20:06:09 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:14:27.588 20:06:09 -- dd/common.sh@172 -- # local id=1 00:14:27.588 20:06:09 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:14:27.588 20:06:09 -- dd/common.sh@176 -- # echo 1 00:14:27.588 20:06:09 -- dd/common.sh@177 -- # echo 1 00:14:27.588 20:06:09 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:27.847 00:14:27.847 real 0m13.342s 00:14:27.847 user 0m9.246s 00:14:27.847 sys 0m10.581s 00:14:27.847 20:06:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:27.847 20:06:09 -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 ************************************ 00:14:27.847 END TEST dd_uring_copy 00:14:27.847 ************************************ 00:14:27.847 00:14:27.847 real 0m13.633s 00:14:27.847 user 0m9.361s 00:14:27.847 sys 0m10.747s 00:14:27.847 20:06:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:27.847 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 ************************************ 00:14:27.847 END TEST spdk_dd_uring 00:14:27.847 ************************************ 00:14:27.847 20:06:10 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:14:27.847 20:06:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:27.847 20:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:27.847 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:28.106 ************************************ 00:14:28.106 START TEST spdk_dd_sparse 00:14:28.106 ************************************ 00:14:28.106 20:06:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:14:28.106 * Looking for test storage... 00:14:28.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:28.106 20:06:10 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.106 20:06:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.106 20:06:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.106 20:06:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.106 20:06:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.106 20:06:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.106 20:06:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.106 20:06:10 -- paths/export.sh@5 -- # export PATH 00:14:28.106 20:06:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.106 20:06:10 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:14:28.106 20:06:10 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:14:28.106 20:06:10 -- dd/sparse.sh@110 -- # 
file1=file_zero1 00:14:28.106 20:06:10 -- dd/sparse.sh@111 -- # file2=file_zero2 00:14:28.106 20:06:10 -- dd/sparse.sh@112 -- # file3=file_zero3 00:14:28.106 20:06:10 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:14:28.106 20:06:10 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:14:28.106 20:06:10 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:14:28.106 20:06:10 -- dd/sparse.sh@118 -- # prepare 00:14:28.106 20:06:10 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:14:28.106 20:06:10 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:14:28.106 1+0 records in 00:14:28.106 1+0 records out 00:14:28.106 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00933116 s, 449 MB/s 00:14:28.106 20:06:10 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:14:28.106 1+0 records in 00:14:28.106 1+0 records out 00:14:28.106 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00622075 s, 674 MB/s 00:14:28.106 20:06:10 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:14:28.106 1+0 records in 00:14:28.106 1+0 records out 00:14:28.106 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00975321 s, 430 MB/s 00:14:28.106 20:06:10 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:14:28.106 20:06:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:28.106 20:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.106 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:28.366 ************************************ 00:14:28.366 START TEST dd_sparse_file_to_file 00:14:28.366 ************************************ 00:14:28.366 20:06:10 -- common/autotest_common.sh@1111 -- # file_to_file 00:14:28.366 20:06:10 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:14:28.366 20:06:10 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:14:28.366 20:06:10 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:28.366 20:06:10 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:14:28.366 20:06:10 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:14:28.366 20:06:10 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:14:28.366 20:06:10 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:14:28.366 20:06:10 -- dd/sparse.sh@41 -- # gen_conf 00:14:28.366 20:06:10 -- dd/common.sh@31 -- # xtrace_disable 00:14:28.366 20:06:10 -- common/autotest_common.sh@10 -- # set +x 00:14:28.366 [2024-04-24 20:06:10.465288] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
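[Editor's note] The sparse suite prepares a 100 MiB AIO backing file (truncate dd_sparse_aio_disk --size 104857600) and a sparse input file_zero1 containing three 4 MiB data extents (dd bs=4M count=1 at seek 0, 4, and 8), so the file is mostly holes. dd_sparse_file_to_file, starting above, copies file_zero1 to file_zero2 with --bs=12582912 --sparse so the holes are preserved, over a JSON config that creates an AIO bdev and an lvstore; the stat checks afterwards require both the apparent size (%s, 37748736 in this run) and the allocated block count (%b, 24576 blocks, i.e. the three 4 MiB extents) to match between the two files. A hedged sketch with an on-disk config (sparse.json is an illustrative name):

    cat > sparse.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_aio_create",
        "params": { "name": "dd_aio", "filename": "dd_sparse_aio_disk", "block_size": 4096 } },
      { "method": "bdev_lvol_create_lvstore",
        "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" } },
      { "method": "bdev_wait_for_examine" }
    ] } ] }
    EOF
    ./build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json sparse.json
    [[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]   # same apparent size
    [[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]   # same allocated blocks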
00:14:28.366 [2024-04-24 20:06:10.465373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64425 ] 00:14:28.366 { 00:14:28.366 "subsystems": [ 00:14:28.366 { 00:14:28.366 "subsystem": "bdev", 00:14:28.366 "config": [ 00:14:28.366 { 00:14:28.366 "params": { 00:14:28.366 "block_size": 4096, 00:14:28.366 "filename": "dd_sparse_aio_disk", 00:14:28.366 "name": "dd_aio" 00:14:28.366 }, 00:14:28.366 "method": "bdev_aio_create" 00:14:28.366 }, 00:14:28.366 { 00:14:28.366 "params": { 00:14:28.366 "lvs_name": "dd_lvstore", 00:14:28.366 "bdev_name": "dd_aio" 00:14:28.366 }, 00:14:28.366 "method": "bdev_lvol_create_lvstore" 00:14:28.366 }, 00:14:28.366 { 00:14:28.366 "method": "bdev_wait_for_examine" 00:14:28.366 } 00:14:28.366 ] 00:14:28.366 } 00:14:28.366 ] 00:14:28.366 } 00:14:28.366 [2024-04-24 20:06:10.606465] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.625 [2024-04-24 20:06:10.707668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.884  Copying: 12/36 [MB] (average 750 MBps) 00:14:28.884 00:14:28.884 20:06:11 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:14:28.884 20:06:11 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:14:28.884 20:06:11 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:14:28.884 20:06:11 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:14:28.884 20:06:11 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:14:28.884 20:06:11 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:14:28.884 20:06:11 -- dd/sparse.sh@52 -- # stat1_b=24576 00:14:28.884 20:06:11 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:14:28.885 20:06:11 -- dd/sparse.sh@53 -- # stat2_b=24576 00:14:28.885 20:06:11 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:14:28.885 00:14:28.885 real 0m0.697s 00:14:28.885 user 0m0.454s 00:14:28.885 sys 0m0.324s 00:14:28.885 20:06:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.885 20:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:28.885 ************************************ 00:14:28.885 END TEST dd_sparse_file_to_file 00:14:28.885 ************************************ 00:14:29.145 20:06:11 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:14:29.145 20:06:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.145 20:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.145 20:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.145 ************************************ 00:14:29.145 START TEST dd_sparse_file_to_bdev 00:14:29.145 ************************************ 00:14:29.145 20:06:11 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:14:29.145 20:06:11 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:29.145 20:06:11 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:14:29.145 20:06:11 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:14:29.145 20:06:11 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:14:29.145 20:06:11 -- dd/sparse.sh@73 -- # gen_conf 00:14:29.145 20:06:11 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 
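[Editor's note] dd_sparse_file_to_bdev goes the other direction: the config (visible just below) recreates the AIO bdev and lvstore, adds a thin-provisioned 37,748,736-byte logical volume dd_lvstore/dd_lvol, and the command echoed above copies file_zero2 into that lvol with --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse. A hedged sketch of the delta over the sparse.json sketch given earlier; sparse_lvol.json is an illustrative name and the values are taken from the config in the log:

    # Same aio + lvstore entries as sparse.json above, plus one more config entry:
    #   { "method": "bdev_lvol_create",
    #     "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
    #                 "size": 37748736, "thin_provision": true } }
    ./build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json sparse_lvol.json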
00:14:29.145 20:06:11 -- dd/common.sh@31 -- # xtrace_disable 00:14:29.145 20:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.145 [2024-04-24 20:06:11.285079] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:29.145 [2024-04-24 20:06:11.285162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64477 ] 00:14:29.145 { 00:14:29.145 "subsystems": [ 00:14:29.145 { 00:14:29.145 "subsystem": "bdev", 00:14:29.145 "config": [ 00:14:29.145 { 00:14:29.145 "params": { 00:14:29.145 "block_size": 4096, 00:14:29.145 "filename": "dd_sparse_aio_disk", 00:14:29.145 "name": "dd_aio" 00:14:29.145 }, 00:14:29.145 "method": "bdev_aio_create" 00:14:29.145 }, 00:14:29.145 { 00:14:29.145 "params": { 00:14:29.145 "lvs_name": "dd_lvstore", 00:14:29.145 "lvol_name": "dd_lvol", 00:14:29.145 "size": 37748736, 00:14:29.145 "thin_provision": true 00:14:29.145 }, 00:14:29.145 "method": "bdev_lvol_create" 00:14:29.145 }, 00:14:29.145 { 00:14:29.145 "method": "bdev_wait_for_examine" 00:14:29.145 } 00:14:29.145 ] 00:14:29.145 } 00:14:29.145 ] 00:14:29.145 } 00:14:29.404 [2024-04-24 20:06:11.425488] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.404 [2024-04-24 20:06:11.531585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.404 [2024-04-24 20:06:11.620679] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:14:29.663  Copying: 12/36 [MB] (average 500 MBps)[2024-04-24 20:06:11.663027] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:14:29.663 00:14:29.663 00:14:29.663 00:14:29.663 real 0m0.650s 00:14:29.663 user 0m0.449s 00:14:29.663 sys 0m0.284s 00:14:29.663 20:06:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.663 20:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.663 ************************************ 00:14:29.663 END TEST dd_sparse_file_to_bdev 00:14:29.663 ************************************ 00:14:29.922 20:06:11 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:14:29.922 20:06:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:29.922 20:06:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.922 20:06:11 -- common/autotest_common.sh@10 -- # set +x 00:14:29.922 ************************************ 00:14:29.922 START TEST dd_sparse_bdev_to_file 00:14:29.922 ************************************ 00:14:29.922 20:06:12 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:14:29.922 20:06:12 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:14:29.922 20:06:12 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:14:29.922 20:06:12 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:29.922 20:06:12 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:14:29.922 20:06:12 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:14:29.922 20:06:12 -- dd/sparse.sh@91 -- # gen_conf 00:14:29.922 20:06:12 -- dd/common.sh@31 -- # xtrace_disable 00:14:29.922 20:06:12 -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.922 [2024-04-24 20:06:12.095232] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:29.922 [2024-04-24 20:06:12.095301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64519 ] 00:14:29.922 { 00:14:29.922 "subsystems": [ 00:14:29.922 { 00:14:29.922 "subsystem": "bdev", 00:14:29.922 "config": [ 00:14:29.922 { 00:14:29.922 "params": { 00:14:29.922 "block_size": 4096, 00:14:29.922 "filename": "dd_sparse_aio_disk", 00:14:29.922 "name": "dd_aio" 00:14:29.922 }, 00:14:29.922 "method": "bdev_aio_create" 00:14:29.922 }, 00:14:29.922 { 00:14:29.922 "method": "bdev_wait_for_examine" 00:14:29.922 } 00:14:29.922 ] 00:14:29.922 } 00:14:29.922 ] 00:14:29.922 } 00:14:30.182 [2024-04-24 20:06:12.232969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.182 [2024-04-24 20:06:12.336645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.441  Copying: 12/36 [MB] (average 705 MBps) 00:14:30.441 00:14:30.441 20:06:12 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:14:30.441 20:06:12 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:14:30.441 20:06:12 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:14:30.701 20:06:12 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:14:30.701 20:06:12 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:14:30.701 20:06:12 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:14:30.701 20:06:12 -- dd/sparse.sh@102 -- # stat2_b=24576 00:14:30.701 20:06:12 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:14:30.701 20:06:12 -- dd/sparse.sh@103 -- # stat3_b=24576 00:14:30.701 20:06:12 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:14:30.701 00:14:30.701 real 0m0.667s 00:14:30.701 user 0m0.442s 00:14:30.701 sys 0m0.307s 00:14:30.701 20:06:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.701 20:06:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.701 ************************************ 00:14:30.701 END TEST dd_sparse_bdev_to_file 00:14:30.701 ************************************ 00:14:30.701 20:06:12 -- dd/sparse.sh@1 -- # cleanup 00:14:30.701 20:06:12 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:14:30.701 20:06:12 -- dd/sparse.sh@12 -- # rm file_zero1 00:14:30.701 20:06:12 -- dd/sparse.sh@13 -- # rm file_zero2 00:14:30.701 20:06:12 -- dd/sparse.sh@14 -- # rm file_zero3 00:14:30.701 00:14:30.701 real 0m2.635s 00:14:30.701 user 0m1.550s 00:14:30.701 sys 0m1.306s 00:14:30.701 20:06:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.701 20:06:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.701 ************************************ 00:14:30.701 END TEST spdk_dd_sparse 00:14:30.701 ************************************ 00:14:30.701 20:06:12 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:14:30.701 20:06:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.701 20:06:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.701 20:06:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.960 ************************************ 00:14:30.960 START TEST spdk_dd_negative 00:14:30.960 ************************************ 00:14:30.960 20:06:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
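Every case in the negative suite that follows relies on the same pattern: wrap the spdk_dd invocation in NOT, which succeeds only when the wrapped command exits non-zero, then verify the failure class via the es= values (22, 234, 244, ...) seen below. A simplified stand-in for the helper from autotest_common.sh, shown only to illustrate the shape of the check rather than the real implementation:

NOT() {                        # succeed only if the wrapped command fails
  local es=0
  "$@" || es=$?
  (( es != 0 ))
}
# e.g. an unknown option must be rejected by spdk_dd:
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=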
00:14:30.960 * Looking for test storage... 00:14:30.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:30.960 20:06:13 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.960 20:06:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.960 20:06:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.960 20:06:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.960 20:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.960 20:06:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.960 20:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.960 20:06:13 -- paths/export.sh@5 -- # export PATH 00:14:30.960 20:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.960 20:06:13 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:30.960 20:06:13 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:30.960 20:06:13 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:30.960 20:06:13 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:30.960 20:06:13 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:14:30.960 20:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.960 20:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.960 20:06:13 -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.960 ************************************ 00:14:30.960 START TEST dd_invalid_arguments 00:14:30.960 ************************************ 00:14:30.960 20:06:13 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:14:30.960 20:06:13 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:30.960 20:06:13 -- common/autotest_common.sh@638 -- # local es=0 00:14:30.960 20:06:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:30.960 20:06:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.960 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.960 20:06:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.960 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.960 20:06:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.960 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:30.960 20:06:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:30.960 20:06:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:30.960 20:06:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:31.219 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:14:31.219 00:14:31.219 CPU options: 00:14:31.219 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:14:31.219 (like [0,1,10]) 00:14:31.219 --lcores lcore to CPU mapping list. The list is in the format: 00:14:31.219 [<,lcores[@CPUs]>...] 00:14:31.219 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:14:31.219 Within the group, '-' is used for range separator, 00:14:31.219 ',' is used for single number separator. 00:14:31.219 '( )' can be omitted for single element group, 00:14:31.219 '@' can be omitted if cpus and lcores have the same value 00:14:31.219 --disable-cpumask-locks Disable CPU core lock files. 00:14:31.219 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:14:31.219 pollers in the app support interrupt mode) 00:14:31.219 -p, --main-core main (primary) core for DPDK 00:14:31.219 00:14:31.219 Configuration options: 00:14:31.219 -c, --config, --json JSON config file 00:14:31.219 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:14:31.219 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:14:31.219 --wait-for-rpc wait for RPCs to initialize subsystems 00:14:31.219 --rpcs-allowed comma-separated list of permitted RPCS 00:14:31.219 --json-ignore-init-errors don't exit on invalid config entry 00:14:31.219 00:14:31.219 Memory options: 00:14:31.219 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:14:31.219 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:14:31.219 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:14:31.219 -R, --huge-unlink unlink huge files after initialization 00:14:31.219 -n, --mem-channels number of memory channels used for DPDK 00:14:31.219 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:14:31.219 --msg-mempool-size global message memory pool size in count (default: 262143) 00:14:31.219 --no-huge run without using hugepages 00:14:31.219 -i, --shm-id shared memory ID (optional) 00:14:31.219 -g, --single-file-segments force creating just one hugetlbfs file 00:14:31.219 00:14:31.219 PCI options: 00:14:31.219 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:14:31.219 -B, --pci-blocked pci addr to block (can be used more than once) 00:14:31.219 -u, --no-pci disable PCI access 00:14:31.219 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:14:31.219 00:14:31.219 Log options: 00:14:31.219 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:14:31.219 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:14:31.219 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:14:31.219 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:14:31.219 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:14:31.219 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:14:31.219 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:14:31.219 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:14:31.220 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:14:31.220 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:14:31.220 virtio_vfio_user, vmd) 00:14:31.220 --silence-noticelog disable notice level logging to stderr 00:14:31.220 00:14:31.220 Trace options: 00:14:31.220 --num-trace-entries number of trace entries for each core, must be power of 2, 00:14:31.220 setting 0 to disable trace (default 32768) 00:14:31.220 Tracepoints vary in size and can use more than one trace entry. 00:14:31.220 -e, --tpoint-group [:] 00:14:31.220 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:14:31.220 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:14:31.220 [2024-04-24 20:06:13.251528] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:14:31.220 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:14:31.220 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:14:31.220 a tracepoint group. First tpoint inside a group can be enabled by 00:14:31.220 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:14:31.220 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:14:31.220 in /include/spdk_internal/trace_defs.h 00:14:31.220 00:14:31.220 Other options: 00:14:31.220 -h, --help show this usage 00:14:31.220 -v, --version print SPDK version 00:14:31.220 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:14:31.220 --env-context Opaque context for use of the env implementation 00:14:31.220 00:14:31.220 Application specific: 00:14:31.220 [--------- DD Options ---------] 00:14:31.220 --if Input file. Must specify either --if or --ib. 00:14:31.220 --ib Input bdev. Must specifier either --if or --ib 00:14:31.220 --of Output file. Must specify either --of or --ob. 00:14:31.220 --ob Output bdev. Must specify either --of or --ob. 00:14:31.220 --iflag Input file flags. 00:14:31.220 --oflag Output file flags. 00:14:31.220 --bs I/O unit size (default: 4096) 00:14:31.220 --qd Queue depth (default: 2) 00:14:31.220 --count I/O unit count. The number of I/O units to copy. (default: all) 00:14:31.220 --skip Skip this many I/O units at start of input. (default: 0) 00:14:31.220 --seek Skip this many I/O units at start of output. (default: 0) 00:14:31.220 --aio Force usage of AIO. (by default io_uring is used if available) 00:14:31.220 --sparse Enable hole skipping in input target 00:14:31.220 Available iflag and oflag values: 00:14:31.220 append - append mode 00:14:31.220 direct - use direct I/O for data 00:14:31.220 directory - fail unless a directory 00:14:31.220 dsync - use synchronized I/O for data 00:14:31.220 noatime - do not update access time 00:14:31.220 noctty - do not assign controlling terminal from file 00:14:31.220 nofollow - do not follow symlinks 00:14:31.220 nonblock - use non-blocking I/O 00:14:31.220 sync - use synchronized I/O for data and metadata 00:14:31.220 20:06:13 -- common/autotest_common.sh@641 -- # es=2 00:14:31.220 20:06:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.220 20:06:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.220 20:06:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.220 00:14:31.220 real 0m0.067s 00:14:31.220 user 0m0.033s 00:14:31.220 sys 0m0.033s 00:14:31.220 20:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.220 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.220 ************************************ 00:14:31.220 END TEST dd_invalid_arguments 00:14:31.220 ************************************ 00:14:31.220 20:06:13 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:14:31.220 20:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.220 20:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.220 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.220 ************************************ 00:14:31.220 START TEST dd_double_input 00:14:31.220 ************************************ 00:14:31.220 20:06:13 -- common/autotest_common.sh@1111 -- # double_input 00:14:31.220 20:06:13 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:31.220 20:06:13 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.220 20:06:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:31.220 20:06:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.220 20:06:13 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:14:31.220 20:06:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.220 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.220 20:06:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.220 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.220 20:06:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.220 20:06:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.220 20:06:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:31.220 [2024-04-24 20:06:13.464300] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:14:31.479 20:06:13 -- common/autotest_common.sh@641 -- # es=22 00:14:31.479 20:06:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.479 20:06:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.479 20:06:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.479 00:14:31.479 real 0m0.072s 00:14:31.479 user 0m0.037s 00:14:31.479 sys 0m0.034s 00:14:31.479 20:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.479 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.479 ************************************ 00:14:31.479 END TEST dd_double_input 00:14:31.479 ************************************ 00:14:31.479 20:06:13 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:14:31.479 20:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.479 20:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.479 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.479 ************************************ 00:14:31.479 START TEST dd_double_output 00:14:31.479 ************************************ 00:14:31.479 20:06:13 -- common/autotest_common.sh@1111 -- # double_output 00:14:31.479 20:06:13 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:31.479 20:06:13 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.479 20:06:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:31.479 20:06:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.479 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.479 20:06:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.479 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.479 20:06:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.479 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.479 20:06:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.479 20:06:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.479 20:06:13 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:31.479 [2024-04-24 20:06:13.680421] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:14:31.479 20:06:13 -- common/autotest_common.sh@641 -- # es=22 00:14:31.479 20:06:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.479 20:06:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.479 20:06:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.479 00:14:31.479 real 0m0.069s 00:14:31.479 user 0m0.043s 00:14:31.479 sys 0m0.025s 00:14:31.479 20:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.479 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.479 ************************************ 00:14:31.479 END TEST dd_double_output 00:14:31.479 ************************************ 00:14:31.739 20:06:13 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:14:31.739 20:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.739 20:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.739 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.739 ************************************ 00:14:31.739 START TEST dd_no_input 00:14:31.739 ************************************ 00:14:31.739 20:06:13 -- common/autotest_common.sh@1111 -- # no_input 00:14:31.739 20:06:13 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:31.739 20:06:13 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.739 20:06:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:31.739 20:06:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.739 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.739 20:06:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.739 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.739 20:06:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.739 20:06:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.739 20:06:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.739 20:06:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.739 20:06:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:31.739 [2024-04-24 20:06:13.896523] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:14:31.739 20:06:13 -- common/autotest_common.sh@641 -- # es=22 00:14:31.739 20:06:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.739 20:06:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.739 20:06:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.739 00:14:31.739 real 0m0.075s 00:14:31.739 user 0m0.043s 00:14:31.739 sys 0m0.031s 00:14:31.739 20:06:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.739 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.739 ************************************ 00:14:31.739 END TEST dd_no_input 00:14:31.739 ************************************ 00:14:31.739 20:06:13 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:14:31.739 20:06:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.739 20:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.739 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.999 ************************************ 00:14:31.999 START TEST dd_no_output 00:14:31.999 ************************************ 00:14:31.999 20:06:14 -- common/autotest_common.sh@1111 -- # no_output 00:14:31.999 20:06:14 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:31.999 20:06:14 -- common/autotest_common.sh@638 -- # local es=0 00:14:31.999 20:06:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:31.999 20:06:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.999 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.999 20:06:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.999 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.999 20:06:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.999 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:31.999 20:06:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:31.999 20:06:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:31.999 20:06:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:31.999 [2024-04-24 20:06:14.105871] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:14:31.999 20:06:14 -- common/autotest_common.sh@641 -- # es=22 00:14:31.999 20:06:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:31.999 20:06:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:31.999 20:06:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:31.999 00:14:31.999 real 0m0.072s 00:14:31.999 user 0m0.043s 00:14:31.999 sys 0m0.027s 00:14:31.999 20:06:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.999 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:31.999 ************************************ 00:14:31.999 END TEST dd_no_output 00:14:31.999 ************************************ 00:14:31.999 20:06:14 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:14:31.999 20:06:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.999 20:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.999 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:32.259 ************************************ 00:14:32.259 START TEST dd_wrong_blocksize 00:14:32.259 ************************************ 00:14:32.259 20:06:14 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:14:32.259 20:06:14 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:32.259 20:06:14 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.259 20:06:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:32.259 20:06:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.259 20:06:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.259 20:06:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:32.259 20:06:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:32.259 [2024-04-24 20:06:14.323239] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:14:32.259 20:06:14 -- common/autotest_common.sh@641 -- # es=22 00:14:32.259 20:06:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:32.259 20:06:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:32.259 20:06:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:32.259 00:14:32.259 real 0m0.071s 00:14:32.259 user 0m0.041s 00:14:32.259 sys 0m0.029s 00:14:32.259 20:06:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:32.259 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:32.259 ************************************ 00:14:32.259 END TEST dd_wrong_blocksize 00:14:32.259 ************************************ 00:14:32.259 20:06:14 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:14:32.259 20:06:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:32.259 20:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.259 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:32.259 ************************************ 00:14:32.259 START TEST dd_smaller_blocksize 00:14:32.259 ************************************ 00:14:32.259 20:06:14 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:14:32.259 20:06:14 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:32.259 20:06:14 -- common/autotest_common.sh@638 -- # local es=0 00:14:32.259 20:06:14 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:32.259 20:06:14 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.259 20:06:14 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:32.259 20:06:14 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:32.259 20:06:14 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:32.259 20:06:14 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:32.519 [2024-04-24 20:06:14.528430] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:32.519 [2024-04-24 20:06:14.528499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64779 ] 00:14:32.519 [2024-04-24 20:06:14.667044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.519 [2024-04-24 20:06:14.769401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.087 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:14:33.087 [2024-04-24 20:06:15.069589] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:14:33.087 [2024-04-24 20:06:15.069672] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.087 [2024-04-24 20:06:15.164857] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:33.087 20:06:15 -- common/autotest_common.sh@641 -- # es=244 00:14:33.087 20:06:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.087 20:06:15 -- common/autotest_common.sh@650 -- # es=116 00:14:33.087 20:06:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:33.087 20:06:15 -- common/autotest_common.sh@658 -- # es=1 00:14:33.087 20:06:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.087 00:14:33.087 real 0m0.806s 00:14:33.087 user 0m0.395s 00:14:33.087 sys 0m0.305s 00:14:33.087 20:06:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.087 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.087 ************************************ 00:14:33.087 END TEST dd_smaller_blocksize 00:14:33.087 ************************************ 00:14:33.087 20:06:15 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:14:33.087 20:06:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.087 20:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.087 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.347 ************************************ 00:14:33.347 START TEST dd_invalid_count 00:14:33.347 ************************************ 00:14:33.347 20:06:15 -- common/autotest_common.sh@1111 -- # invalid_count 00:14:33.347 20:06:15 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:33.347 20:06:15 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.347 20:06:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:33.347 20:06:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.347 20:06:15 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.347 20:06:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.347 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.347 20:06:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.347 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.347 20:06:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.347 20:06:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.347 20:06:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:33.347 [2024-04-24 20:06:15.490720] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:14:33.347 20:06:15 -- common/autotest_common.sh@641 -- # es=22 00:14:33.347 20:06:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.347 20:06:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.347 20:06:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.347 00:14:33.347 real 0m0.070s 00:14:33.347 user 0m0.043s 00:14:33.347 sys 0m0.026s 00:14:33.347 20:06:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.347 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.347 ************************************ 00:14:33.347 END TEST dd_invalid_count 00:14:33.347 ************************************ 00:14:33.347 20:06:15 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:14:33.347 20:06:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.347 20:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.347 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.606 ************************************ 00:14:33.606 START TEST dd_invalid_oflag 00:14:33.606 ************************************ 00:14:33.606 20:06:15 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:14:33.606 20:06:15 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:33.606 20:06:15 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.606 20:06:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:33.606 20:06:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.606 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.606 20:06:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.606 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.606 20:06:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.606 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.606 20:06:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.606 20:06:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.606 20:06:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:33.606 [2024-04-24 20:06:15.710074] 
spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:14:33.606 20:06:15 -- common/autotest_common.sh@641 -- # es=22 00:14:33.606 20:06:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.606 20:06:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.606 20:06:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.606 00:14:33.606 real 0m0.072s 00:14:33.606 user 0m0.041s 00:14:33.606 sys 0m0.030s 00:14:33.606 20:06:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.606 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.606 ************************************ 00:14:33.606 END TEST dd_invalid_oflag 00:14:33.606 ************************************ 00:14:33.606 20:06:15 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:14:33.606 20:06:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.606 20:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.606 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.606 ************************************ 00:14:33.606 START TEST dd_invalid_iflag 00:14:33.606 ************************************ 00:14:33.606 20:06:15 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:14:33.606 20:06:15 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:33.606 20:06:15 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.865 20:06:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:33.866 20:06:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.866 20:06:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.866 20:06:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.866 20:06:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.866 20:06:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:33.866 [2024-04-24 20:06:15.915054] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:14:33.866 20:06:15 -- common/autotest_common.sh@641 -- # es=22 00:14:33.866 20:06:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:33.866 20:06:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:33.866 20:06:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:33.866 00:14:33.866 real 0m0.070s 00:14:33.866 user 0m0.042s 00:14:33.866 sys 0m0.027s 00:14:33.866 20:06:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.866 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.866 ************************************ 00:14:33.866 END TEST dd_invalid_iflag 00:14:33.866 ************************************ 00:14:33.866 20:06:15 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:14:33.866 20:06:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:33.866 20:06:15 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:14:33.866 20:06:15 -- common/autotest_common.sh@10 -- # set +x 00:14:33.866 ************************************ 00:14:33.866 START TEST dd_unknown_flag 00:14:33.866 ************************************ 00:14:33.866 20:06:16 -- common/autotest_common.sh@1111 -- # unknown_flag 00:14:33.866 20:06:16 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:33.866 20:06:16 -- common/autotest_common.sh@638 -- # local es=0 00:14:33.866 20:06:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:33.866 20:06:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.866 20:06:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.866 20:06:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:33.866 20:06:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:33.866 20:06:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:33.866 20:06:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:34.125 [2024-04-24 20:06:16.133509] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
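The dd_unknown_flag case starting here passes --oflag=-1, which parse_flags rejects because flags are given by name rather than by numeric value. For contrast, a valid invocation would name one of the oflag values listed in the usage text above (the dd.dump files are the ones this suite touched earlier; flag choice here is only illustrative):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
  --oflag=dsync --bs=4096      # 'dsync' is one of the documented oflag values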
00:14:34.125 [2024-04-24 20:06:16.133589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64893 ] 00:14:34.125 [2024-04-24 20:06:16.273505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.383 [2024-04-24 20:06:16.379539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.383 [2024-04-24 20:06:16.451161] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:14:34.383 [2024-04-24 20:06:16.451223] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:34.383 [2024-04-24 20:06:16.451271] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:14:34.383 [2024-04-24 20:06:16.451279] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:34.383 [2024-04-24 20:06:16.451513] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:14:34.383 [2024-04-24 20:06:16.451525] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:34.383 [2024-04-24 20:06:16.451571] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:14:34.383 [2024-04-24 20:06:16.451578] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:14:34.383 [2024-04-24 20:06:16.549175] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:34.642 20:06:16 -- common/autotest_common.sh@641 -- # es=234 00:14:34.642 20:06:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:34.642 20:06:16 -- common/autotest_common.sh@650 -- # es=106 00:14:34.642 20:06:16 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:34.642 20:06:16 -- common/autotest_common.sh@658 -- # es=1 00:14:34.642 20:06:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:34.642 00:14:34.642 real 0m0.594s 00:14:34.642 user 0m0.363s 00:14:34.642 sys 0m0.133s 00:14:34.642 20:06:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.642 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:14:34.642 ************************************ 00:14:34.642 END TEST dd_unknown_flag 00:14:34.642 ************************************ 00:14:34.642 20:06:16 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:14:34.642 20:06:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:34.642 20:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.642 20:06:16 -- common/autotest_common.sh@10 -- # set +x 00:14:34.642 ************************************ 00:14:34.642 START TEST dd_invalid_json 00:14:34.642 ************************************ 00:14:34.642 20:06:16 -- common/autotest_common.sh@1111 -- # invalid_json 00:14:34.642 20:06:16 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:34.642 20:06:16 -- dd/negative_dd.sh@95 -- # : 00:14:34.642 20:06:16 -- common/autotest_common.sh@638 -- # local es=0 00:14:34.642 20:06:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:34.642 20:06:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:34.642 20:06:16 -- common/autotest_common.sh@630 -- # 
case "$(type -t "$arg")" in 00:14:34.642 20:06:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:34.642 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.642 20:06:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:34.642 20:06:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:34.642 20:06:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:34.642 20:06:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:34.642 20:06:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:34.642 [2024-04-24 20:06:16.866835] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:34.643 [2024-04-24 20:06:16.866917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64925 ] 00:14:34.901 [2024-04-24 20:06:17.006281] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.901 [2024-04-24 20:06:17.110967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.901 [2024-04-24 20:06:17.111043] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:14:34.901 [2024-04-24 20:06:17.111055] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:34.901 [2024-04-24 20:06:17.111061] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:34.901 [2024-04-24 20:06:17.111094] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:35.161 20:06:17 -- common/autotest_common.sh@641 -- # es=234 00:14:35.161 20:06:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:35.161 20:06:17 -- common/autotest_common.sh@650 -- # es=106 00:14:35.161 20:06:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:14:35.161 20:06:17 -- common/autotest_common.sh@658 -- # es=1 00:14:35.161 20:06:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:35.161 00:14:35.161 real 0m0.430s 00:14:35.161 user 0m0.257s 00:14:35.161 sys 0m0.071s 00:14:35.161 20:06:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.161 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 ************************************ 00:14:35.161 END TEST dd_invalid_json 00:14:35.161 ************************************ 00:14:35.161 00:14:35.161 real 0m4.333s 00:14:35.161 user 0m2.034s 00:14:35.161 sys 0m1.841s 00:14:35.161 20:06:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.161 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 ************************************ 00:14:35.161 END TEST spdk_dd_negative 00:14:35.161 ************************************ 00:14:35.161 00:14:35.161 real 1m15.913s 00:14:35.161 user 0m49.048s 00:14:35.161 sys 0m30.842s 00:14:35.161 20:06:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:35.161 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.161 ************************************ 00:14:35.161 END TEST spdk_dd 00:14:35.161 ************************************ 00:14:35.161 20:06:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 
1 ']' 00:14:35.161 20:06:17 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:14:35.161 20:06:17 -- spdk/autotest.sh@258 -- # timing_exit lib 00:14:35.161 20:06:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:35.161 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 20:06:17 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:14:35.421 20:06:17 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:14:35.421 20:06:17 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:14:35.421 20:06:17 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:14:35.421 20:06:17 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:14:35.421 20:06:17 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:14:35.421 20:06:17 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:14:35.421 20:06:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:35.421 20:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.421 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.421 ************************************ 00:14:35.421 START TEST nvmf_tcp 00:14:35.421 ************************************ 00:14:35.421 20:06:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:14:35.421 * Looking for test storage... 00:14:35.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:35.421 20:06:17 -- nvmf/nvmf.sh@10 -- # uname -s 00:14:35.421 20:06:17 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:14:35.421 20:06:17 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.421 20:06:17 -- nvmf/common.sh@7 -- # uname -s 00:14:35.421 20:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.421 20:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.421 20:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.421 20:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.421 20:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.421 20:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.421 20:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.421 20:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.421 20:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.421 20:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.421 20:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:35.421 20:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:35.421 20:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.421 20:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.421 20:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.421 20:06:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.421 20:06:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.421 20:06:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.421 20:06:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.421 20:06:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.421 20:06:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.421 20:06:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.421 20:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.421 20:06:17 -- paths/export.sh@5 -- # export PATH 00:14:35.421 20:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.421 20:06:17 -- nvmf/common.sh@47 -- # : 0 00:14:35.421 20:06:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.421 20:06:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.421 20:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.421 20:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.421 20:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.421 20:06:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.421 20:06:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.421 20:06:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.421 20:06:17 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:35.421 20:06:17 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:14:35.421 20:06:17 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:14:35.421 20:06:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:35.421 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.682 20:06:17 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:14:35.683 20:06:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:35.683 20:06:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:35.683 20:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.683 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:35.683 ************************************ 00:14:35.683 START TEST nvmf_host_management 00:14:35.683 ************************************ 00:14:35.683 20:06:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:35.683 * Looking for test storage... 
00:14:35.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:35.683 20:06:17 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.683 20:06:17 -- nvmf/common.sh@7 -- # uname -s 00:14:35.683 20:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.683 20:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.683 20:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.683 20:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.683 20:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.683 20:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.683 20:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.683 20:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.683 20:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.683 20:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.683 20:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:35.683 20:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:35.683 20:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.683 20:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.683 20:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.683 20:06:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.683 20:06:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.683 20:06:17 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.683 20:06:17 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.683 20:06:17 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.683 20:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.683 20:06:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.683 20:06:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.683 20:06:17 -- paths/export.sh@5 -- # export PATH 00:14:35.683 20:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.683 20:06:17 -- nvmf/common.sh@47 -- # : 0 00:14:35.683 20:06:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.683 20:06:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.683 20:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.683 20:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.683 20:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.683 20:06:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.683 20:06:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.683 20:06:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.943 20:06:17 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.943 20:06:17 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.943 20:06:17 -- target/host_management.sh@105 -- # nvmftestinit 00:14:35.943 20:06:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:35.943 20:06:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.943 20:06:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:35.943 20:06:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:35.943 20:06:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:35.943 20:06:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.943 20:06:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.943 20:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.943 20:06:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:35.943 20:06:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:35.943 20:06:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:35.943 20:06:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:35.943 20:06:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:35.943 20:06:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:35.943 20:06:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.943 20:06:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.943 20:06:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:35.943 20:06:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:35.943 20:06:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.943 20:06:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.943 20:06:17 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.943 20:06:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.943 20:06:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.943 20:06:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.943 20:06:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.943 20:06:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.943 20:06:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:35.943 Cannot find device "nvmf_init_br" 00:14:35.943 20:06:17 -- nvmf/common.sh@154 -- # true 00:14:35.943 20:06:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:35.943 Cannot find device "nvmf_tgt_br" 00:14:35.943 20:06:17 -- nvmf/common.sh@155 -- # true 00:14:35.943 20:06:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.943 Cannot find device "nvmf_tgt_br2" 00:14:35.943 20:06:17 -- nvmf/common.sh@156 -- # true 00:14:35.943 20:06:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:35.943 Cannot find device "nvmf_init_br" 00:14:35.943 20:06:18 -- nvmf/common.sh@157 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:35.943 Cannot find device "nvmf_tgt_br" 00:14:35.943 20:06:18 -- nvmf/common.sh@158 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:35.943 Cannot find device "nvmf_tgt_br2" 00:14:35.943 20:06:18 -- nvmf/common.sh@159 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:35.943 Cannot find device "nvmf_br" 00:14:35.943 20:06:18 -- nvmf/common.sh@160 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:35.943 Cannot find device "nvmf_init_if" 00:14:35.943 20:06:18 -- nvmf/common.sh@161 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.943 20:06:18 -- nvmf/common.sh@162 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.943 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.943 20:06:18 -- nvmf/common.sh@163 -- # true 00:14:35.943 20:06:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.943 20:06:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.943 20:06:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.943 20:06:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.943 20:06:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.943 20:06:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.943 20:06:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.943 20:06:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.943 20:06:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:35.943 20:06:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:35.943 20:06:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:35.943 20:06:18 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:35.943 20:06:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:35.943 20:06:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.943 20:06:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.943 20:06:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.943 20:06:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:36.203 20:06:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:36.203 20:06:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.203 20:06:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.203 20:06:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.203 20:06:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.203 20:06:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.203 20:06:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:36.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:36.203 00:14:36.203 --- 10.0.0.2 ping statistics --- 00:14:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.203 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:36.203 20:06:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:36.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:14:36.203 00:14:36.203 --- 10.0.0.3 ping statistics --- 00:14:36.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.203 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:36.204 20:06:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:36.204 00:14:36.204 --- 10.0.0.1 ping statistics --- 00:14:36.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.204 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:36.204 20:06:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.204 20:06:18 -- nvmf/common.sh@422 -- # return 0 00:14:36.204 20:06:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:36.204 20:06:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.204 20:06:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:36.204 20:06:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:36.204 20:06:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.204 20:06:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:36.204 20:06:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:36.204 20:06:18 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:14:36.204 20:06:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:36.204 20:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.204 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:14:36.204 ************************************ 00:14:36.204 START TEST nvmf_host_management 00:14:36.204 ************************************ 00:14:36.204 20:06:18 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:14:36.204 20:06:18 -- target/host_management.sh@69 -- # starttarget 00:14:36.204 20:06:18 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:36.204 20:06:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:36.204 20:06:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.204 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:14:36.204 20:06:18 -- nvmf/common.sh@470 -- # nvmfpid=65204 00:14:36.204 20:06:18 -- nvmf/common.sh@471 -- # waitforlisten 65204 00:14:36.204 20:06:18 -- common/autotest_common.sh@817 -- # '[' -z 65204 ']' 00:14:36.204 20:06:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.204 20:06:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.204 20:06:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.204 20:06:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:36.204 20:06:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.204 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:14:36.462 [2024-04-24 20:06:18.476007] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:36.463 [2024-04-24 20:06:18.476081] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.463 [2024-04-24 20:06:18.621173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.722 [2024-04-24 20:06:18.730829] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.722 [2024-04-24 20:06:18.730889] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:36.722 [2024-04-24 20:06:18.730896] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.722 [2024-04-24 20:06:18.730902] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.722 [2024-04-24 20:06:18.730908] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.722 [2024-04-24 20:06:18.731059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.722 [2024-04-24 20:06:18.731178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.722 [2024-04-24 20:06:18.731281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:36.722 [2024-04-24 20:06:18.731283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.293 20:06:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:37.293 20:06:19 -- common/autotest_common.sh@850 -- # return 0 00:14:37.293 20:06:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:37.293 20:06:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:37.293 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 20:06:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.293 20:06:19 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.293 20:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.293 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 [2024-04-24 20:06:19.384456] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.293 20:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.293 20:06:19 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:37.293 20:06:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:37.293 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 20:06:19 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:37.293 20:06:19 -- target/host_management.sh@23 -- # cat 00:14:37.293 20:06:19 -- target/host_management.sh@30 -- # rpc_cmd 00:14:37.293 20:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.293 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 Malloc0 00:14:37.293 [2024-04-24 20:06:19.454936] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:37.293 [2024-04-24 20:06:19.455291] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.293 20:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.293 20:06:19 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:37.293 20:06:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:37.293 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 20:06:19 -- target/host_management.sh@73 -- # perfpid=65259 00:14:37.293 20:06:19 -- target/host_management.sh@74 -- # waitforlisten 65259 /var/tmp/bdevperf.sock 00:14:37.293 20:06:19 -- common/autotest_common.sh@817 -- # '[' -z 65259 ']' 00:14:37.293 20:06:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.293 20:06:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:37.293 20:06:19 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.293 20:06:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:37.293 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:14:37.293 20:06:19 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:37.293 20:06:19 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:37.293 20:06:19 -- nvmf/common.sh@521 -- # config=() 00:14:37.293 20:06:19 -- nvmf/common.sh@521 -- # local subsystem config 00:14:37.293 20:06:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:37.293 20:06:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:37.293 { 00:14:37.293 "params": { 00:14:37.293 "name": "Nvme$subsystem", 00:14:37.293 "trtype": "$TEST_TRANSPORT", 00:14:37.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:37.293 "adrfam": "ipv4", 00:14:37.293 "trsvcid": "$NVMF_PORT", 00:14:37.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:37.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:37.293 "hdgst": ${hdgst:-false}, 00:14:37.293 "ddgst": ${ddgst:-false} 00:14:37.293 }, 00:14:37.293 "method": "bdev_nvme_attach_controller" 00:14:37.293 } 00:14:37.293 EOF 00:14:37.293 )") 00:14:37.293 20:06:19 -- nvmf/common.sh@543 -- # cat 00:14:37.293 20:06:19 -- nvmf/common.sh@545 -- # jq . 00:14:37.293 20:06:19 -- nvmf/common.sh@546 -- # IFS=, 00:14:37.293 20:06:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:37.293 "params": { 00:14:37.293 "name": "Nvme0", 00:14:37.293 "trtype": "tcp", 00:14:37.293 "traddr": "10.0.0.2", 00:14:37.293 "adrfam": "ipv4", 00:14:37.293 "trsvcid": "4420", 00:14:37.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:37.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:37.293 "hdgst": false, 00:14:37.293 "ddgst": false 00:14:37.293 }, 00:14:37.293 "method": "bdev_nvme_attach_controller" 00:14:37.293 }' 00:14:37.553 [2024-04-24 20:06:19.559115] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:37.553 [2024-04-24 20:06:19.559202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65259 ] 00:14:37.553 [2024-04-24 20:06:19.713820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.837 [2024-04-24 20:06:19.816037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.837 Running I/O for 10 seconds... 
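The target JSON that gen_nvmf_target_json hands to bdevperf on /dev/fd/63 above is hard to read once it is flattened into the trace. The snippet below only restates the printf output already captured in this log and pipes it through jq the same way the helper itself does; the substituted values (tcp, 10.0.0.2, 4420, cnode0/host0) are the ones this run used, nothing here is newly generated output.

    printf '%s\n' '{ "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' | jq .
    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

This is the configuration the -q 64 -o 65536 -w verify -t 10 bdevperf run above uses to attach the remote controller at 10.0.0.2:4420 before the "Running I/O for 10 seconds..." line.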
00:14:38.409 20:06:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.409 20:06:20 -- common/autotest_common.sh@850 -- # return 0 00:14:38.409 20:06:20 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:38.409 20:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.409 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 20:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.409 20:06:20 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:38.409 20:06:20 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:38.409 20:06:20 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:38.409 20:06:20 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:38.409 20:06:20 -- target/host_management.sh@52 -- # local ret=1 00:14:38.409 20:06:20 -- target/host_management.sh@53 -- # local i 00:14:38.409 20:06:20 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:38.409 20:06:20 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:38.409 20:06:20 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:38.409 20:06:20 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:38.409 20:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.409 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 20:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.409 20:06:20 -- target/host_management.sh@55 -- # read_io_count=835 00:14:38.409 20:06:20 -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:14:38.409 20:06:20 -- target/host_management.sh@59 -- # ret=0 00:14:38.409 20:06:20 -- target/host_management.sh@60 -- # break 00:14:38.409 20:06:20 -- target/host_management.sh@64 -- # return 0 00:14:38.409 20:06:20 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:38.409 20:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.409 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 [2024-04-24 20:06:20.514332] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514388] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514425] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the 
state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514480] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514496] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514502] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514561] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514618] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514624] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514629] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514640] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514650] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 
20:06:20.514672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514677] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514682] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514688] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514693] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514699] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514710] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514715] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514720] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ab710 is same with the state(5) to be set 00:14:38.409 [2024-04-24 20:06:20.514821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.409 [2024-04-24 20:06:20.514855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.409 [2024-04-24 20:06:20.514874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 
[2024-04-24 20:06:20.514926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.514992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.514999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 
20:06:20.515079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 
20:06:20.515244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 
20:06:20.515412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.410 [2024-04-24 20:06:20.515489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.410 [2024-04-24 20:06:20.515495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 
20:06:20.515549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 
20:06:20.515702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:38.411 [2024-04-24 20:06:20.515814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:38.411 [2024-04-24 20:06:20.515823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x680ae0 is same with the state(5) to be set 00:14:38.411 [2024-04-24 20:06:20.515877] 
bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x680ae0 was disconnected and freed. reset controller. 00:14:38.411 [2024-04-24 20:06:20.516830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:38.411 task offset: 114688 on job bdev=Nvme0n1 fails 00:14:38.411 00:14:38.411 Latency(us) 00:14:38.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.411 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:38.411 Job: Nvme0n1 ended in about 0.54 seconds with error 00:14:38.411 Verification LBA range: start 0x0 length 0x400 00:14:38.411 Nvme0n1 : 0.54 1660.18 103.76 118.58 0.00 35086.09 3319.73 34113.06 00:14:38.411 =================================================================================================================== 00:14:38.411 Total : 1660.18 103.76 118.58 0.00 35086.09 3319.73 34113.06 00:14:38.411 20:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.411 20:06:20 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:38.411 20:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.411 [2024-04-24 20:06:20.518872] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:38.411 [2024-04-24 20:06:20.518895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x65b1b0 (9): Bad file descriptor 00:14:38.411 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:14:38.411 [2024-04-24 20:06:20.524995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:38.411 20:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:38.411 20:06:20 -- target/host_management.sh@87 -- # sleep 1 00:14:39.349 20:06:21 -- target/host_management.sh@91 -- # kill -9 65259 00:14:39.349 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65259) - No such process 00:14:39.349 20:06:21 -- target/host_management.sh@91 -- # true 00:14:39.349 20:06:21 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:39.349 20:06:21 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:39.349 20:06:21 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:39.349 20:06:21 -- nvmf/common.sh@521 -- # config=() 00:14:39.349 20:06:21 -- nvmf/common.sh@521 -- # local subsystem config 00:14:39.349 20:06:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:39.349 20:06:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:39.349 { 00:14:39.349 "params": { 00:14:39.349 "name": "Nvme$subsystem", 00:14:39.349 "trtype": "$TEST_TRANSPORT", 00:14:39.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:39.349 "adrfam": "ipv4", 00:14:39.349 "trsvcid": "$NVMF_PORT", 00:14:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:39.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:39.349 "hdgst": ${hdgst:-false}, 00:14:39.349 "ddgst": ${ddgst:-false} 00:14:39.349 }, 00:14:39.349 "method": "bdev_nvme_attach_controller" 00:14:39.349 } 00:14:39.349 EOF 00:14:39.349 )") 00:14:39.349 20:06:21 -- nvmf/common.sh@543 -- # cat 00:14:39.349 20:06:21 -- nvmf/common.sh@545 -- # jq . 
00:14:39.349 20:06:21 -- nvmf/common.sh@546 -- # IFS=, 00:14:39.349 20:06:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:39.349 "params": { 00:14:39.349 "name": "Nvme0", 00:14:39.349 "trtype": "tcp", 00:14:39.349 "traddr": "10.0.0.2", 00:14:39.349 "adrfam": "ipv4", 00:14:39.349 "trsvcid": "4420", 00:14:39.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.349 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:39.349 "hdgst": false, 00:14:39.349 "ddgst": false 00:14:39.349 }, 00:14:39.349 "method": "bdev_nvme_attach_controller" 00:14:39.349 }' 00:14:39.349 [2024-04-24 20:06:21.584716] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:39.349 [2024-04-24 20:06:21.584791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65296 ] 00:14:39.609 [2024-04-24 20:06:21.722649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.609 [2024-04-24 20:06:21.828584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.867 Running I/O for 1 seconds... 00:14:40.804 00:14:40.804 Latency(us) 00:14:40.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.804 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:40.804 Verification LBA range: start 0x0 length 0x400 00:14:40.804 Nvme0n1 : 1.02 1883.39 117.71 0.00 0.00 33388.86 3691.77 31594.65 00:14:40.804 =================================================================================================================== 00:14:40.804 Total : 1883.39 117.71 0.00 0.00 33388.86 3691.77 31594.65 00:14:41.062 20:06:23 -- target/host_management.sh@102 -- # stoptarget 00:14:41.062 20:06:23 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:41.062 20:06:23 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:41.062 20:06:23 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:41.062 20:06:23 -- target/host_management.sh@40 -- # nvmftestfini 00:14:41.062 20:06:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:41.062 20:06:23 -- nvmf/common.sh@117 -- # sync 00:14:41.322 20:06:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:41.322 20:06:23 -- nvmf/common.sh@120 -- # set +e 00:14:41.322 20:06:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:41.322 20:06:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:41.322 rmmod nvme_tcp 00:14:41.322 rmmod nvme_fabrics 00:14:41.322 rmmod nvme_keyring 00:14:41.322 20:06:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:41.322 20:06:23 -- nvmf/common.sh@124 -- # set -e 00:14:41.322 20:06:23 -- nvmf/common.sh@125 -- # return 0 00:14:41.322 20:06:23 -- nvmf/common.sh@478 -- # '[' -n 65204 ']' 00:14:41.322 20:06:23 -- nvmf/common.sh@479 -- # killprocess 65204 00:14:41.322 20:06:23 -- common/autotest_common.sh@936 -- # '[' -z 65204 ']' 00:14:41.322 20:06:23 -- common/autotest_common.sh@940 -- # kill -0 65204 00:14:41.322 20:06:23 -- common/autotest_common.sh@941 -- # uname 00:14:41.322 20:06:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.322 20:06:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65204 00:14:41.322 killing process with pid 65204 00:14:41.322 20:06:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
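For reference, the gen_nvmf_target_json + bdevperf trace above boils down to the stand-alone sketch below. Only the inner bdev_nvme_attach_controller object is printed verbatim in the trace; the surrounding subsystems/bdev wrapper is assumed here from the usual SPDK JSON-config layout, while the addresses, NQNs, flags and the bdevperf path are the ones used in this run.

cat > /tmp/bdevperf_nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# 64-deep, 64 KiB verify workload for 1 second against the attached namespace,
# matching the -q 64 -o 65536 -w verify -t 1 invocation traced above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1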
00:14:41.322 20:06:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:41.322 20:06:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65204' 00:14:41.322 20:06:23 -- common/autotest_common.sh@955 -- # kill 65204 00:14:41.322 [2024-04-24 20:06:23.438445] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:41.322 20:06:23 -- common/autotest_common.sh@960 -- # wait 65204 00:14:41.581 [2024-04-24 20:06:23.654571] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:41.581 20:06:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:41.581 20:06:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:41.581 20:06:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:41.581 20:06:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.581 20:06:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:41.581 20:06:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.581 20:06:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.581 20:06:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.581 20:06:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:41.581 00:14:41.581 real 0m5.326s 00:14:41.581 user 0m22.189s 00:14:41.581 sys 0m1.163s 00:14:41.581 20:06:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.581 20:06:23 -- common/autotest_common.sh@10 -- # set +x 00:14:41.581 ************************************ 00:14:41.581 END TEST nvmf_host_management 00:14:41.581 ************************************ 00:14:41.581 20:06:23 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:41.581 00:14:41.581 real 0m6.024s 00:14:41.581 user 0m22.381s 00:14:41.581 sys 0m1.489s 00:14:41.581 20:06:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.581 20:06:23 -- common/autotest_common.sh@10 -- # set +x 00:14:41.581 ************************************ 00:14:41.581 END TEST nvmf_host_management 00:14:41.581 ************************************ 00:14:41.841 20:06:23 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:41.841 20:06:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:41.841 20:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.841 20:06:23 -- common/autotest_common.sh@10 -- # set +x 00:14:41.841 ************************************ 00:14:41.841 START TEST nvmf_lvol 00:14:41.841 ************************************ 00:14:41.841 20:06:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:41.841 * Looking for test storage... 
00:14:41.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:41.841 20:06:24 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:41.841 20:06:24 -- nvmf/common.sh@7 -- # uname -s 00:14:41.841 20:06:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.841 20:06:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.841 20:06:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.841 20:06:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.841 20:06:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.841 20:06:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.841 20:06:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.841 20:06:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.841 20:06:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.841 20:06:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.841 20:06:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:41.841 20:06:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:41.841 20:06:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.841 20:06:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.841 20:06:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:41.841 20:06:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.841 20:06:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:42.100 20:06:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.100 20:06:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.101 20:06:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.101 20:06:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.101 20:06:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.101 20:06:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.101 20:06:24 -- paths/export.sh@5 -- # export PATH 00:14:42.101 20:06:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.101 20:06:24 -- nvmf/common.sh@47 -- # : 0 00:14:42.101 20:06:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.101 20:06:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.101 20:06:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.101 20:06:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.101 20:06:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.101 20:06:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.101 20:06:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.101 20:06:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.101 20:06:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.101 20:06:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.101 20:06:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:42.101 20:06:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:42.101 20:06:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:42.101 20:06:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:42.101 20:06:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:42.101 20:06:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.101 20:06:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:42.101 20:06:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:42.101 20:06:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:42.101 20:06:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.101 20:06:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.101 20:06:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.101 20:06:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:42.101 20:06:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:42.101 20:06:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:42.101 20:06:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:42.101 20:06:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:42.101 20:06:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:42.101 20:06:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.101 20:06:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.101 20:06:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:42.101 20:06:24 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:42.101 20:06:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:42.101 20:06:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:42.101 20:06:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:42.101 20:06:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.101 20:06:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:42.101 20:06:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:42.101 20:06:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:42.101 20:06:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:42.101 20:06:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:42.101 20:06:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:42.101 Cannot find device "nvmf_tgt_br" 00:14:42.101 20:06:24 -- nvmf/common.sh@155 -- # true 00:14:42.101 20:06:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.101 Cannot find device "nvmf_tgt_br2" 00:14:42.101 20:06:24 -- nvmf/common.sh@156 -- # true 00:14:42.101 20:06:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:42.101 20:06:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:42.101 Cannot find device "nvmf_tgt_br" 00:14:42.101 20:06:24 -- nvmf/common.sh@158 -- # true 00:14:42.101 20:06:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:42.101 Cannot find device "nvmf_tgt_br2" 00:14:42.101 20:06:24 -- nvmf/common.sh@159 -- # true 00:14:42.101 20:06:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:42.101 20:06:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:42.101 20:06:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.101 20:06:24 -- nvmf/common.sh@162 -- # true 00:14:42.101 20:06:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:42.101 20:06:24 -- nvmf/common.sh@163 -- # true 00:14:42.101 20:06:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:42.101 20:06:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:42.101 20:06:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:42.101 20:06:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:42.101 20:06:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:42.101 20:06:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:42.101 20:06:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:42.101 20:06:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:42.101 20:06:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:42.360 20:06:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:42.360 20:06:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:42.360 20:06:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:42.360 20:06:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:42.360 20:06:24 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:42.360 20:06:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:42.360 20:06:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:42.360 20:06:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:42.360 20:06:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:42.360 20:06:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:42.360 20:06:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:42.360 20:06:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:42.360 20:06:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:42.360 20:06:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:42.360 20:06:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:42.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:42.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:42.360 00:14:42.360 --- 10.0.0.2 ping statistics --- 00:14:42.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.360 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:42.360 20:06:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:42.360 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:42.360 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:42.360 00:14:42.360 --- 10.0.0.3 ping statistics --- 00:14:42.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.360 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:42.360 20:06:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:42.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:42.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:42.360 00:14:42.360 --- 10.0.0.1 ping statistics --- 00:14:42.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:42.360 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:42.360 20:06:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:42.360 20:06:24 -- nvmf/common.sh@422 -- # return 0 00:14:42.360 20:06:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:42.360 20:06:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:42.360 20:06:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:42.360 20:06:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:42.360 20:06:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:42.360 20:06:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:42.360 20:06:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:42.360 20:06:24 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:42.360 20:06:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:42.360 20:06:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.360 20:06:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.360 20:06:24 -- nvmf/common.sh@470 -- # nvmfpid=65534 00:14:42.360 20:06:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:42.360 20:06:24 -- nvmf/common.sh@471 -- # waitforlisten 65534 00:14:42.360 20:06:24 -- common/autotest_common.sh@817 -- # '[' -z 65534 ']' 00:14:42.360 20:06:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.360 20:06:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.360 20:06:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.361 20:06:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.361 20:06:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.361 [2024-04-24 20:06:24.507159] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:42.361 [2024-04-24 20:06:24.507228] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.618 [2024-04-24 20:06:24.648702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.618 [2024-04-24 20:06:24.752818] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.618 [2024-04-24 20:06:24.752866] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.619 [2024-04-24 20:06:24.752889] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.619 [2024-04-24 20:06:24.752895] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.619 [2024-04-24 20:06:24.752900] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
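The nvmf_veth_init sequence traced above builds the test network used by this run: one host-side veth (nvmf_init_if, 10.0.0.1/24) and two target-side veths (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24) moved into the nvmf_tgt_ns_spdk namespace, with the peer ends joined by the nvmf_br bridge. Condensed into a minimal sketch, using the same names and addresses as this run:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the peer ends together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# Allow NVMe/TCP traffic in and across the bridge, verify reachability, then
# launch the target inside the namespace, as the trace above does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &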
00:14:42.619 [2024-04-24 20:06:24.753186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.619 [2024-04-24 20:06:24.753059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.619 [2024-04-24 20:06:24.753189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.189 20:06:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.189 20:06:25 -- common/autotest_common.sh@850 -- # return 0 00:14:43.189 20:06:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.189 20:06:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.189 20:06:25 -- common/autotest_common.sh@10 -- # set +x 00:14:43.189 20:06:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.189 20:06:25 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:43.449 [2024-04-24 20:06:25.595736] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.449 20:06:25 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.708 20:06:25 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:43.708 20:06:25 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:43.967 20:06:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:43.967 20:06:26 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:44.227 20:06:26 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:44.227 20:06:26 -- target/nvmf_lvol.sh@29 -- # lvs=5d7d15cd-4150-4017-bfd8-ad9a0bfff7a1 00:14:44.227 20:06:26 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5d7d15cd-4150-4017-bfd8-ad9a0bfff7a1 lvol 20 00:14:44.486 20:06:26 -- target/nvmf_lvol.sh@32 -- # lvol=5d5c5298-0c62-4504-95e3-2e59ac37b671 00:14:44.486 20:06:26 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:44.745 20:06:26 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d5c5298-0c62-4504-95e3-2e59ac37b671 00:14:44.745 20:06:26 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:45.005 [2024-04-24 20:06:27.160009] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:45.005 [2024-04-24 20:06:27.160299] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.005 20:06:27 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.264 20:06:27 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:45.264 20:06:27 -- target/nvmf_lvol.sh@42 -- # perf_pid=65604 00:14:45.264 20:06:27 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:46.200 20:06:28 -- target/nvmf_lvol.sh@47 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5d5c5298-0c62-4504-95e3-2e59ac37b671 MY_SNAPSHOT 00:14:46.459 20:06:28 -- target/nvmf_lvol.sh@47 -- # snapshot=879b286b-f7e0-4245-b5d5-8064010d1250 00:14:46.459 20:06:28 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5d5c5298-0c62-4504-95e3-2e59ac37b671 30 00:14:46.717 20:06:28 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 879b286b-f7e0-4245-b5d5-8064010d1250 MY_CLONE 00:14:46.977 20:06:29 -- target/nvmf_lvol.sh@49 -- # clone=d58c0d50-3298-46a3-976c-c7a41338013b 00:14:46.977 20:06:29 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d58c0d50-3298-46a3-976c-c7a41338013b 00:14:47.236 20:06:29 -- target/nvmf_lvol.sh@53 -- # wait 65604 00:14:55.426 Initializing NVMe Controllers 00:14:55.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:55.426 Controller IO queue size 128, less than required. 00:14:55.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:55.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:55.426 Initialization complete. Launching workers. 00:14:55.426 ======================================================== 00:14:55.426 Latency(us) 00:14:55.426 Device Information : IOPS MiB/s Average min max 00:14:55.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9849.59 38.47 13002.97 2159.67 74432.10 00:14:55.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10087.49 39.40 12689.59 182.38 60006.80 00:14:55.426 ======================================================== 00:14:55.426 Total : 19937.08 77.88 12844.41 182.38 74432.10 00:14:55.426 00:14:55.426 20:06:37 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:55.685 20:06:37 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5d5c5298-0c62-4504-95e3-2e59ac37b671 00:14:55.944 20:06:38 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d7d15cd-4150-4017-bfd8-ad9a0bfff7a1 00:14:56.204 20:06:38 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:56.204 20:06:38 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:56.204 20:06:38 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:56.204 20:06:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:56.204 20:06:38 -- nvmf/common.sh@117 -- # sync 00:14:56.204 20:06:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.204 20:06:38 -- nvmf/common.sh@120 -- # set +e 00:14:56.204 20:06:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.204 20:06:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.204 rmmod nvme_tcp 00:14:56.204 rmmod nvme_fabrics 00:14:56.204 rmmod nvme_keyring 00:14:56.204 20:06:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.204 20:06:38 -- nvmf/common.sh@124 -- # set -e 00:14:56.204 20:06:38 -- nvmf/common.sh@125 -- # return 0 00:14:56.204 20:06:38 -- nvmf/common.sh@478 -- # '[' -n 65534 ']' 00:14:56.204 20:06:38 -- nvmf/common.sh@479 -- # killprocess 65534 00:14:56.204 20:06:38 -- common/autotest_common.sh@936 -- # '[' -z 65534 ']' 00:14:56.204 20:06:38 -- 
common/autotest_common.sh@940 -- # kill -0 65534 00:14:56.204 20:06:38 -- common/autotest_common.sh@941 -- # uname 00:14:56.204 20:06:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:56.204 20:06:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65534 00:14:56.205 20:06:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:56.205 20:06:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:56.205 killing process with pid 65534 00:14:56.205 20:06:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65534' 00:14:56.205 20:06:38 -- common/autotest_common.sh@955 -- # kill 65534 00:14:56.205 [2024-04-24 20:06:38.395956] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:56.205 20:06:38 -- common/autotest_common.sh@960 -- # wait 65534 00:14:56.465 20:06:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:56.465 20:06:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:56.465 20:06:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:56.465 20:06:38 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.465 20:06:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.465 20:06:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.465 20:06:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.465 20:06:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.465 20:06:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.465 00:14:56.465 real 0m14.774s 00:14:56.465 user 1m1.646s 00:14:56.465 sys 0m3.937s 00:14:56.465 20:06:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:56.465 20:06:38 -- common/autotest_common.sh@10 -- # set +x 00:14:56.465 ************************************ 00:14:56.465 END TEST nvmf_lvol 00:14:56.465 ************************************ 00:14:56.726 20:06:38 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:56.726 20:06:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:56.726 20:06:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.726 20:06:38 -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 ************************************ 00:14:56.726 START TEST nvmf_lvs_grow 00:14:56.726 ************************************ 00:14:56.726 20:06:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:56.726 * Looking for test storage... 
00:14:56.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:56.726 20:06:38 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.988 20:06:38 -- nvmf/common.sh@7 -- # uname -s 00:14:56.988 20:06:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.988 20:06:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.988 20:06:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.988 20:06:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.988 20:06:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.988 20:06:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.988 20:06:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.988 20:06:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.988 20:06:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.988 20:06:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.988 20:06:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:56.988 20:06:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:14:56.988 20:06:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.988 20:06:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.988 20:06:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.988 20:06:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.988 20:06:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.988 20:06:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.988 20:06:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.988 20:06:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.988 20:06:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.988 20:06:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.988 20:06:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.988 20:06:39 -- paths/export.sh@5 -- # export PATH 00:14:56.988 20:06:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.988 20:06:39 -- nvmf/common.sh@47 -- # : 0 00:14:56.988 20:06:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.988 20:06:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.988 20:06:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.988 20:06:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.988 20:06:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.988 20:06:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.988 20:06:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.988 20:06:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.988 20:06:39 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.988 20:06:39 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:56.988 20:06:39 -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:56.988 20:06:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:56.988 20:06:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.988 20:06:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:56.988 20:06:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:56.988 20:06:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:56.988 20:06:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.988 20:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.988 20:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.988 20:06:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:56.988 20:06:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:56.988 20:06:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:56.988 20:06:39 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:56.988 20:06:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:56.988 20:06:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:56.988 20:06:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.988 20:06:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.988 20:06:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.988 20:06:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.988 20:06:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.988 20:06:39 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.988 20:06:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.988 20:06:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.988 20:06:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.988 20:06:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.988 20:06:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.988 20:06:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.988 20:06:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.988 20:06:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.988 Cannot find device "nvmf_tgt_br" 00:14:56.988 20:06:39 -- nvmf/common.sh@155 -- # true 00:14:56.988 20:06:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.988 Cannot find device "nvmf_tgt_br2" 00:14:56.988 20:06:39 -- nvmf/common.sh@156 -- # true 00:14:56.988 20:06:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.988 20:06:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.988 Cannot find device "nvmf_tgt_br" 00:14:56.988 20:06:39 -- nvmf/common.sh@158 -- # true 00:14:56.988 20:06:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.988 Cannot find device "nvmf_tgt_br2" 00:14:56.988 20:06:39 -- nvmf/common.sh@159 -- # true 00:14:56.988 20:06:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:56.988 20:06:39 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:56.988 20:06:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.988 20:06:39 -- nvmf/common.sh@162 -- # true 00:14:56.988 20:06:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.988 20:06:39 -- nvmf/common.sh@163 -- # true 00:14:56.988 20:06:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.988 20:06:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.988 20:06:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.988 20:06:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.249 20:06:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.249 20:06:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.249 20:06:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.249 20:06:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.249 20:06:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.249 20:06:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:57.249 20:06:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:57.249 20:06:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:57.249 20:06:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:57.249 20:06:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.249 20:06:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:57.249 20:06:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.249 20:06:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:57.249 20:06:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:57.249 20:06:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.249 20:06:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.249 20:06:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.249 20:06:39 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.249 20:06:39 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.249 20:06:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:57.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:14:57.249 00:14:57.249 --- 10.0.0.2 ping statistics --- 00:14:57.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.249 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:14:57.249 20:06:39 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:57.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:14:57.249 00:14:57.249 --- 10.0.0.3 ping statistics --- 00:14:57.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.249 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:14:57.249 20:06:39 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:14:57.249 00:14:57.249 --- 10.0.0.1 ping statistics --- 00:14:57.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.249 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:57.249 20:06:39 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.249 20:06:39 -- nvmf/common.sh@422 -- # return 0 00:14:57.249 20:06:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:57.249 20:06:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.249 20:06:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:57.249 20:06:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:57.249 20:06:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.249 20:06:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:57.249 20:06:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:57.249 20:06:39 -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:57.249 20:06:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:57.249 20:06:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.250 20:06:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.250 20:06:39 -- nvmf/common.sh@470 -- # nvmfpid=65925 00:14:57.250 20:06:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:57.250 20:06:39 -- nvmf/common.sh@471 -- # waitforlisten 65925 00:14:57.250 20:06:39 -- common/autotest_common.sh@817 -- # '[' -z 65925 ']' 00:14:57.250 20:06:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.250 20:06:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
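The wait above is essentially a poll on the target's RPC socket. A rough equivalent, assuming rpc_get_methods as the readiness probe, followed by the transport-creation RPC that the trace issues next:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Poll until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock.
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done

# First RPC of this test: create the TCP transport, with the same options as traced below.
"$rpc" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192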
00:14:57.250 20:06:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.250 20:06:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.250 20:06:39 -- common/autotest_common.sh@10 -- # set +x 00:14:57.250 [2024-04-24 20:06:39.440167] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:14:57.250 [2024-04-24 20:06:39.440236] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.511 [2024-04-24 20:06:39.577520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.511 [2024-04-24 20:06:39.683050] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.511 [2024-04-24 20:06:39.683105] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.511 [2024-04-24 20:06:39.683113] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.511 [2024-04-24 20:06:39.683119] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.511 [2024-04-24 20:06:39.683123] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.511 [2024-04-24 20:06:39.683164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.169 20:06:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.169 20:06:40 -- common/autotest_common.sh@850 -- # return 0 00:14:58.169 20:06:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:58.169 20:06:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:58.169 20:06:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.169 20:06:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.169 20:06:40 -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:58.428 [2024-04-24 20:06:40.538305] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:58.428 20:06:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.428 20:06:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.428 20:06:40 -- common/autotest_common.sh@10 -- # set +x 00:14:58.428 ************************************ 00:14:58.428 START TEST lvs_grow_clean 00:14:58.428 ************************************ 00:14:58.428 20:06:40 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:58.428 20:06:40 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:58.687 20:06:40 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:58.687 20:06:40 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:58.945 20:06:41 -- target/nvmf_lvs_grow.sh@28 -- # lvs=113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:14:58.945 20:06:41 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:14:58.945 20:06:41 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:59.205 20:06:41 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:59.205 20:06:41 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:59.205 20:06:41 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c lvol 150 00:14:59.463 20:06:41 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4d2e9c7e-4627-4b6f-bd7f-070e9312f06f 00:14:59.463 20:06:41 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:59.463 20:06:41 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:59.463 [2024-04-24 20:06:41.675417] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:59.463 [2024-04-24 20:06:41.675505] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:59.463 true 00:14:59.463 20:06:41 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:14:59.463 20:06:41 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:59.722 20:06:41 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:59.722 20:06:41 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:59.981 20:06:42 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4d2e9c7e-4627-4b6f-bd7f-070e9312f06f 00:15:00.241 20:06:42 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:00.499 [2024-04-24 20:06:42.518212] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:00.499 [2024-04-24 20:06:42.518460] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.499 20:06:42 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.758 20:06:42 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66012 00:15:00.758 20:06:42 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 
10 -S 1 -z 00:15:00.758 20:06:42 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.758 20:06:42 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66012 /var/tmp/bdevperf.sock 00:15:00.758 20:06:42 -- common/autotest_common.sh@817 -- # '[' -z 66012 ']' 00:15:00.758 20:06:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.758 20:06:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.758 20:06:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.758 20:06:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.758 20:06:42 -- common/autotest_common.sh@10 -- # set +x 00:15:00.758 [2024-04-24 20:06:42.808343] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:00.758 [2024-04-24 20:06:42.808805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66012 ] 00:15:00.758 [2024-04-24 20:06:42.946670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.017 [2024-04-24 20:06:43.050622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.584 20:06:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.584 20:06:43 -- common/autotest_common.sh@850 -- # return 0 00:15:01.584 20:06:43 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:01.843 Nvme0n1 00:15:01.843 20:06:43 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:02.100 [ 00:15:02.101 { 00:15:02.101 "name": "Nvme0n1", 00:15:02.101 "aliases": [ 00:15:02.101 "4d2e9c7e-4627-4b6f-bd7f-070e9312f06f" 00:15:02.101 ], 00:15:02.101 "product_name": "NVMe disk", 00:15:02.101 "block_size": 4096, 00:15:02.101 "num_blocks": 38912, 00:15:02.101 "uuid": "4d2e9c7e-4627-4b6f-bd7f-070e9312f06f", 00:15:02.101 "assigned_rate_limits": { 00:15:02.101 "rw_ios_per_sec": 0, 00:15:02.101 "rw_mbytes_per_sec": 0, 00:15:02.101 "r_mbytes_per_sec": 0, 00:15:02.101 "w_mbytes_per_sec": 0 00:15:02.101 }, 00:15:02.101 "claimed": false, 00:15:02.101 "zoned": false, 00:15:02.101 "supported_io_types": { 00:15:02.101 "read": true, 00:15:02.101 "write": true, 00:15:02.101 "unmap": true, 00:15:02.101 "write_zeroes": true, 00:15:02.101 "flush": true, 00:15:02.101 "reset": true, 00:15:02.101 "compare": true, 00:15:02.101 "compare_and_write": true, 00:15:02.101 "abort": true, 00:15:02.101 "nvme_admin": true, 00:15:02.101 "nvme_io": true 00:15:02.101 }, 00:15:02.101 "memory_domains": [ 00:15:02.101 { 00:15:02.101 "dma_device_id": "system", 00:15:02.101 "dma_device_type": 1 00:15:02.101 } 00:15:02.101 ], 00:15:02.101 "driver_specific": { 00:15:02.101 "nvme": [ 00:15:02.101 { 00:15:02.101 "trid": { 00:15:02.101 "trtype": "TCP", 00:15:02.101 "adrfam": "IPv4", 00:15:02.101 "traddr": "10.0.0.2", 00:15:02.101 "trsvcid": "4420", 00:15:02.101 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:02.101 }, 00:15:02.101 "ctrlr_data": { 00:15:02.101 "cntlid": 1, 00:15:02.101 
"vendor_id": "0x8086", 00:15:02.101 "model_number": "SPDK bdev Controller", 00:15:02.101 "serial_number": "SPDK0", 00:15:02.101 "firmware_revision": "24.05", 00:15:02.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:02.101 "oacs": { 00:15:02.101 "security": 0, 00:15:02.101 "format": 0, 00:15:02.101 "firmware": 0, 00:15:02.101 "ns_manage": 0 00:15:02.101 }, 00:15:02.101 "multi_ctrlr": true, 00:15:02.101 "ana_reporting": false 00:15:02.101 }, 00:15:02.101 "vs": { 00:15:02.101 "nvme_version": "1.3" 00:15:02.101 }, 00:15:02.101 "ns_data": { 00:15:02.101 "id": 1, 00:15:02.101 "can_share": true 00:15:02.101 } 00:15:02.101 } 00:15:02.101 ], 00:15:02.101 "mp_policy": "active_passive" 00:15:02.101 } 00:15:02.101 } 00:15:02.101 ] 00:15:02.101 20:06:44 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66030 00:15:02.101 20:06:44 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:02.101 20:06:44 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:02.101 Running I/O for 10 seconds... 00:15:03.045 Latency(us) 00:15:03.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.045 Nvme0n1 : 1.00 9008.00 35.19 0.00 0.00 0.00 0.00 0.00 00:15:03.045 =================================================================================================================== 00:15:03.045 Total : 9008.00 35.19 0.00 0.00 0.00 0.00 0.00 00:15:03.045 00:15:03.983 20:06:46 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:04.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.242 Nvme0n1 : 2.00 8822.00 34.46 0.00 0.00 0.00 0.00 0.00 00:15:04.242 =================================================================================================================== 00:15:04.242 Total : 8822.00 34.46 0.00 0.00 0.00 0.00 0.00 00:15:04.242 00:15:04.242 true 00:15:04.242 20:06:46 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:04.242 20:06:46 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:04.512 20:06:46 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:04.512 20:06:46 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:04.512 20:06:46 -- target/nvmf_lvs_grow.sh@65 -- # wait 66030 00:15:05.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.086 Nvme0n1 : 3.00 8844.67 34.55 0.00 0.00 0.00 0.00 0.00 00:15:05.086 =================================================================================================================== 00:15:05.086 Total : 8844.67 34.55 0.00 0.00 0.00 0.00 0.00 00:15:05.086 00:15:06.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.460 Nvme0n1 : 4.00 8824.25 34.47 0.00 0.00 0.00 0.00 0.00 00:15:06.460 =================================================================================================================== 00:15:06.460 Total : 8824.25 34.47 0.00 0.00 0.00 0.00 0.00 00:15:06.460 00:15:07.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.400 Nvme0n1 : 5.00 8812.00 34.42 0.00 0.00 0.00 0.00 0.00 00:15:07.400 
=================================================================================================================== 00:15:07.400 Total : 8812.00 34.42 0.00 0.00 0.00 0.00 0.00 00:15:07.400 00:15:08.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.334 Nvme0n1 : 6.00 8761.50 34.22 0.00 0.00 0.00 0.00 0.00 00:15:08.334 =================================================================================================================== 00:15:08.334 Total : 8761.50 34.22 0.00 0.00 0.00 0.00 0.00 00:15:08.334 00:15:09.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.311 Nvme0n1 : 7.00 8725.43 34.08 0.00 0.00 0.00 0.00 0.00 00:15:09.311 =================================================================================================================== 00:15:09.311 Total : 8725.43 34.08 0.00 0.00 0.00 0.00 0.00 00:15:09.311 00:15:10.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:10.242 Nvme0n1 : 8.00 8698.12 33.98 0.00 0.00 0.00 0.00 0.00 00:15:10.242 =================================================================================================================== 00:15:10.242 Total : 8698.12 33.98 0.00 0.00 0.00 0.00 0.00 00:15:10.242 00:15:11.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:11.172 Nvme0n1 : 9.00 8691.22 33.95 0.00 0.00 0.00 0.00 0.00 00:15:11.172 =================================================================================================================== 00:15:11.172 Total : 8691.22 33.95 0.00 0.00 0.00 0.00 0.00 00:15:11.172 00:15:12.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.108 Nvme0n1 : 10.00 8685.70 33.93 0.00 0.00 0.00 0.00 0.00 00:15:12.108 =================================================================================================================== 00:15:12.108 Total : 8685.70 33.93 0.00 0.00 0.00 0.00 0.00 00:15:12.108 00:15:12.108 00:15:12.108 Latency(us) 00:15:12.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:12.108 Nvme0n1 : 10.01 8693.46 33.96 0.00 0.00 14719.58 10760.50 38234.10 00:15:12.108 =================================================================================================================== 00:15:12.108 Total : 8693.46 33.96 0.00 0.00 14719.58 10760.50 38234.10 00:15:12.108 0 00:15:12.108 20:06:54 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66012 00:15:12.108 20:06:54 -- common/autotest_common.sh@936 -- # '[' -z 66012 ']' 00:15:12.108 20:06:54 -- common/autotest_common.sh@940 -- # kill -0 66012 00:15:12.108 20:06:54 -- common/autotest_common.sh@941 -- # uname 00:15:12.108 20:06:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.108 20:06:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66012 00:15:12.108 killing process with pid 66012 00:15:12.108 Received shutdown signal, test time was about 10.000000 seconds 00:15:12.108 00:15:12.108 Latency(us) 00:15:12.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.108 =================================================================================================================== 00:15:12.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:12.108 20:06:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:12.108 20:06:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:15:12.108 20:06:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66012' 00:15:12.108 20:06:54 -- common/autotest_common.sh@955 -- # kill 66012 00:15:12.108 20:06:54 -- common/autotest_common.sh@960 -- # wait 66012 00:15:12.366 20:06:54 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.624 20:06:54 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:12.883 20:06:55 -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:12.883 20:06:55 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:13.141 20:06:55 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:13.141 20:06:55 -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:13.141 20:06:55 -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:13.141 [2024-04-24 20:06:55.373104] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:13.399 20:06:55 -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:13.400 20:06:55 -- common/autotest_common.sh@638 -- # local es=0 00:15:13.400 20:06:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:13.400 20:06:55 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.400 20:06:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.400 20:06:55 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.400 20:06:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.400 20:06:55 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.400 20:06:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:13.400 20:06:55 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.400 20:06:55 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:13.400 20:06:55 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:13.400 request: 00:15:13.400 { 00:15:13.400 "uuid": "113aced0-0e8d-4ba1-b091-1c1b70688d1c", 00:15:13.400 "method": "bdev_lvol_get_lvstores", 00:15:13.400 "req_id": 1 00:15:13.400 } 00:15:13.400 Got JSON-RPC error response 00:15:13.400 response: 00:15:13.400 { 00:15:13.400 "code": -19, 00:15:13.400 "message": "No such device" 00:15:13.400 } 00:15:13.400 20:06:55 -- common/autotest_common.sh@641 -- # es=1 00:15:13.400 20:06:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:13.400 20:06:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:13.400 20:06:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:13.400 20:06:55 -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:13.657 aio_bdev 00:15:13.657 20:06:55 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
4d2e9c7e-4627-4b6f-bd7f-070e9312f06f 00:15:13.657 20:06:55 -- common/autotest_common.sh@885 -- # local bdev_name=4d2e9c7e-4627-4b6f-bd7f-070e9312f06f 00:15:13.657 20:06:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:13.657 20:06:55 -- common/autotest_common.sh@887 -- # local i 00:15:13.657 20:06:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:13.657 20:06:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:13.657 20:06:55 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:13.916 20:06:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d2e9c7e-4627-4b6f-bd7f-070e9312f06f -t 2000 00:15:14.175 [ 00:15:14.175 { 00:15:14.175 "name": "4d2e9c7e-4627-4b6f-bd7f-070e9312f06f", 00:15:14.175 "aliases": [ 00:15:14.175 "lvs/lvol" 00:15:14.175 ], 00:15:14.175 "product_name": "Logical Volume", 00:15:14.175 "block_size": 4096, 00:15:14.175 "num_blocks": 38912, 00:15:14.175 "uuid": "4d2e9c7e-4627-4b6f-bd7f-070e9312f06f", 00:15:14.175 "assigned_rate_limits": { 00:15:14.175 "rw_ios_per_sec": 0, 00:15:14.175 "rw_mbytes_per_sec": 0, 00:15:14.175 "r_mbytes_per_sec": 0, 00:15:14.175 "w_mbytes_per_sec": 0 00:15:14.175 }, 00:15:14.175 "claimed": false, 00:15:14.175 "zoned": false, 00:15:14.175 "supported_io_types": { 00:15:14.175 "read": true, 00:15:14.175 "write": true, 00:15:14.175 "unmap": true, 00:15:14.175 "write_zeroes": true, 00:15:14.175 "flush": false, 00:15:14.175 "reset": true, 00:15:14.175 "compare": false, 00:15:14.175 "compare_and_write": false, 00:15:14.175 "abort": false, 00:15:14.175 "nvme_admin": false, 00:15:14.175 "nvme_io": false 00:15:14.175 }, 00:15:14.175 "driver_specific": { 00:15:14.175 "lvol": { 00:15:14.175 "lvol_store_uuid": "113aced0-0e8d-4ba1-b091-1c1b70688d1c", 00:15:14.175 "base_bdev": "aio_bdev", 00:15:14.175 "thin_provision": false, 00:15:14.175 "snapshot": false, 00:15:14.175 "clone": false, 00:15:14.175 "esnap_clone": false 00:15:14.175 } 00:15:14.175 } 00:15:14.175 } 00:15:14.175 ] 00:15:14.175 20:06:56 -- common/autotest_common.sh@893 -- # return 0 00:15:14.175 20:06:56 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:14.175 20:06:56 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:14.435 20:06:56 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:14.435 20:06:56 -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:14.435 20:06:56 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:14.695 20:06:56 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:14.695 20:06:56 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4d2e9c7e-4627-4b6f-bd7f-070e9312f06f 00:15:14.954 20:06:57 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 113aced0-0e8d-4ba1-b091-1c1b70688d1c 00:15:15.214 20:06:57 -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:15.214 20:06:57 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:15.782 00:15:15.782 real 0m17.182s 00:15:15.782 user 0m16.199s 00:15:15.782 sys 0m2.201s 00:15:15.782 20:06:57 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:15:15.782 ************************************ 00:15:15.782 END TEST lvs_grow_clean 00:15:15.782 ************************************ 00:15:15.782 20:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:15.782 20:06:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.782 20:06:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.782 20:06:57 -- common/autotest_common.sh@10 -- # set +x 00:15:15.782 ************************************ 00:15:15.782 START TEST lvs_grow_dirty 00:15:15.782 ************************************ 00:15:15.782 20:06:57 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:15.782 20:06:57 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:16.040 20:06:58 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:16.040 20:06:58 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:16.298 20:06:58 -- target/nvmf_lvs_grow.sh@28 -- # lvs=97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:16.298 20:06:58 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:16.298 20:06:58 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:16.556 20:06:58 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:16.556 20:06:58 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:16.556 20:06:58 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 lvol 150 00:15:16.814 20:06:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=27d50918-2487-4c2b-978d-b8eeddbee025 00:15:16.814 20:06:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:16.814 20:06:58 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:16.814 [2024-04-24 20:06:59.033467] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:16.814 [2024-04-24 20:06:59.033566] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:16.814 true 00:15:16.814 20:06:59 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:16.814 20:06:59 -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:17.072 20:06:59 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:17.072 20:06:59 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:17.331 20:06:59 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27d50918-2487-4c2b-978d-b8eeddbee025 00:15:17.589 20:06:59 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:17.847 [2024-04-24 20:06:59.924235] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.847 20:06:59 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:18.106 20:07:00 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:18.106 20:07:00 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66274 00:15:18.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.106 20:07:00 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.106 20:07:00 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66274 /var/tmp/bdevperf.sock 00:15:18.106 20:07:00 -- common/autotest_common.sh@817 -- # '[' -z 66274 ']' 00:15:18.106 20:07:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.106 20:07:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:18.106 20:07:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.106 20:07:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:18.106 20:07:00 -- common/autotest_common.sh@10 -- # set +x 00:15:18.106 [2024-04-24 20:07:00.225281] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:15:18.106 [2024-04-24 20:07:00.225561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66274 ] 00:15:18.363 [2024-04-24 20:07:00.371146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.363 [2024-04-24 20:07:00.473754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.929 20:07:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:18.929 20:07:01 -- common/autotest_common.sh@850 -- # return 0 00:15:18.929 20:07:01 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:19.187 Nvme0n1 00:15:19.187 20:07:01 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:19.446 [ 00:15:19.446 { 00:15:19.446 "name": "Nvme0n1", 00:15:19.446 "aliases": [ 00:15:19.446 "27d50918-2487-4c2b-978d-b8eeddbee025" 00:15:19.446 ], 00:15:19.446 "product_name": "NVMe disk", 00:15:19.446 "block_size": 4096, 00:15:19.446 "num_blocks": 38912, 00:15:19.446 "uuid": "27d50918-2487-4c2b-978d-b8eeddbee025", 00:15:19.446 "assigned_rate_limits": { 00:15:19.446 "rw_ios_per_sec": 0, 00:15:19.446 "rw_mbytes_per_sec": 0, 00:15:19.446 "r_mbytes_per_sec": 0, 00:15:19.446 "w_mbytes_per_sec": 0 00:15:19.446 }, 00:15:19.446 "claimed": false, 00:15:19.446 "zoned": false, 00:15:19.446 "supported_io_types": { 00:15:19.446 "read": true, 00:15:19.446 "write": true, 00:15:19.446 "unmap": true, 00:15:19.446 "write_zeroes": true, 00:15:19.446 "flush": true, 00:15:19.446 "reset": true, 00:15:19.446 "compare": true, 00:15:19.446 "compare_and_write": true, 00:15:19.446 "abort": true, 00:15:19.446 "nvme_admin": true, 00:15:19.446 "nvme_io": true 00:15:19.446 }, 00:15:19.446 "memory_domains": [ 00:15:19.446 { 00:15:19.446 "dma_device_id": "system", 00:15:19.446 "dma_device_type": 1 00:15:19.446 } 00:15:19.446 ], 00:15:19.446 "driver_specific": { 00:15:19.446 "nvme": [ 00:15:19.446 { 00:15:19.446 "trid": { 00:15:19.446 "trtype": "TCP", 00:15:19.446 "adrfam": "IPv4", 00:15:19.446 "traddr": "10.0.0.2", 00:15:19.446 "trsvcid": "4420", 00:15:19.446 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:19.446 }, 00:15:19.446 "ctrlr_data": { 00:15:19.446 "cntlid": 1, 00:15:19.446 "vendor_id": "0x8086", 00:15:19.446 "model_number": "SPDK bdev Controller", 00:15:19.446 "serial_number": "SPDK0", 00:15:19.446 "firmware_revision": "24.05", 00:15:19.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:19.446 "oacs": { 00:15:19.446 "security": 0, 00:15:19.446 "format": 0, 00:15:19.446 "firmware": 0, 00:15:19.446 "ns_manage": 0 00:15:19.446 }, 00:15:19.446 "multi_ctrlr": true, 00:15:19.446 "ana_reporting": false 00:15:19.446 }, 00:15:19.446 "vs": { 00:15:19.446 "nvme_version": "1.3" 00:15:19.446 }, 00:15:19.446 "ns_data": { 00:15:19.446 "id": 1, 00:15:19.446 "can_share": true 00:15:19.446 } 00:15:19.446 } 00:15:19.446 ], 00:15:19.446 "mp_policy": "active_passive" 00:15:19.446 } 00:15:19.446 } 00:15:19.446 ] 00:15:19.446 20:07:01 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66292 00:15:19.446 20:07:01 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:19.446 20:07:01 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:15:19.705 Running I/O for 10 seconds... 00:15:20.639 Latency(us) 00:15:20.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.639 Nvme0n1 : 1.00 9017.00 35.22 0.00 0.00 0.00 0.00 0.00 00:15:20.639 =================================================================================================================== 00:15:20.639 Total : 9017.00 35.22 0.00 0.00 0.00 0.00 0.00 00:15:20.639 00:15:21.575 20:07:03 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:21.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.575 Nvme0n1 : 2.00 8826.50 34.48 0.00 0.00 0.00 0.00 0.00 00:15:21.575 =================================================================================================================== 00:15:21.575 Total : 8826.50 34.48 0.00 0.00 0.00 0.00 0.00 00:15:21.575 00:15:21.833 true 00:15:21.833 20:07:03 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:21.833 20:07:03 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:22.092 20:07:04 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:22.092 20:07:04 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:22.092 20:07:04 -- target/nvmf_lvs_grow.sh@65 -- # wait 66292 00:15:22.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.659 Nvme0n1 : 3.00 8805.33 34.40 0.00 0.00 0.00 0.00 0.00 00:15:22.659 =================================================================================================================== 00:15:22.659 Total : 8805.33 34.40 0.00 0.00 0.00 0.00 0.00 00:15:22.659 00:15:23.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.595 Nvme0n1 : 4.00 8794.75 34.35 0.00 0.00 0.00 0.00 0.00 00:15:23.595 =================================================================================================================== 00:15:23.595 Total : 8794.75 34.35 0.00 0.00 0.00 0.00 0.00 00:15:23.595 00:15:24.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.626 Nvme0n1 : 5.00 8737.60 34.13 0.00 0.00 0.00 0.00 0.00 00:15:24.626 =================================================================================================================== 00:15:24.626 Total : 8737.60 34.13 0.00 0.00 0.00 0.00 0.00 00:15:24.626 00:15:25.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.563 Nvme0n1 : 6.00 8699.50 33.98 0.00 0.00 0.00 0.00 0.00 00:15:25.563 =================================================================================================================== 00:15:25.563 Total : 8699.50 33.98 0.00 0.00 0.00 0.00 0.00 00:15:25.563 00:15:26.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.500 Nvme0n1 : 7.00 8654.14 33.81 0.00 0.00 0.00 0.00 0.00 00:15:26.500 =================================================================================================================== 00:15:26.500 Total : 8654.14 33.81 0.00 0.00 0.00 0.00 0.00 00:15:26.500 00:15:27.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.439 Nvme0n1 : 8.00 8049.50 31.44 0.00 0.00 0.00 0.00 0.00 00:15:27.439 
=================================================================================================================== 00:15:27.439 Total : 8049.50 31.44 0.00 0.00 0.00 0.00 0.00 00:15:27.439 00:15:28.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.818 Nvme0n1 : 9.00 8072.33 31.53 0.00 0.00 0.00 0.00 0.00 00:15:28.818 =================================================================================================================== 00:15:28.818 Total : 8072.33 31.53 0.00 0.00 0.00 0.00 0.00 00:15:28.818 00:15:29.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.756 Nvme0n1 : 10.00 8090.50 31.60 0.00 0.00 0.00 0.00 0.00 00:15:29.756 =================================================================================================================== 00:15:29.756 Total : 8090.50 31.60 0.00 0.00 0.00 0.00 0.00 00:15:29.756 00:15:29.756 00:15:29.756 Latency(us) 00:15:29.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:29.756 Nvme0n1 : 10.01 8098.06 31.63 0.00 0.00 15801.21 9844.71 567787.88 00:15:29.756 =================================================================================================================== 00:15:29.756 Total : 8098.06 31.63 0.00 0.00 15801.21 9844.71 567787.88 00:15:29.756 0 00:15:29.756 20:07:11 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66274 00:15:29.756 20:07:11 -- common/autotest_common.sh@936 -- # '[' -z 66274 ']' 00:15:29.756 20:07:11 -- common/autotest_common.sh@940 -- # kill -0 66274 00:15:29.756 20:07:11 -- common/autotest_common.sh@941 -- # uname 00:15:29.756 20:07:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:29.756 20:07:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66274 00:15:29.756 20:07:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:29.756 20:07:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:29.756 20:07:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66274' 00:15:29.756 killing process with pid 66274 00:15:29.756 20:07:11 -- common/autotest_common.sh@955 -- # kill 66274 00:15:29.756 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.756 00:15:29.756 Latency(us) 00:15:29.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.756 =================================================================================================================== 00:15:29.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.756 20:07:11 -- common/autotest_common.sh@960 -- # wait 66274 00:15:29.756 20:07:11 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:30.016 20:07:12 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:30.275 20:07:12 -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:30.275 20:07:12 -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:30.535 20:07:12 -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:30.535 20:07:12 -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:30.535 20:07:12 -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65925 00:15:30.535 20:07:12 -- 
target/nvmf_lvs_grow.sh@75 -- # wait 65925 00:15:30.535 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65925 Killed "${NVMF_APP[@]}" "$@" 00:15:30.535 20:07:12 -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:30.535 20:07:12 -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:30.535 20:07:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:30.535 20:07:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:30.535 20:07:12 -- common/autotest_common.sh@10 -- # set +x 00:15:30.535 20:07:12 -- nvmf/common.sh@470 -- # nvmfpid=66430 00:15:30.535 20:07:12 -- nvmf/common.sh@471 -- # waitforlisten 66430 00:15:30.535 20:07:12 -- common/autotest_common.sh@817 -- # '[' -z 66430 ']' 00:15:30.535 20:07:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.535 20:07:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:30.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.535 20:07:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.535 20:07:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:30.535 20:07:12 -- common/autotest_common.sh@10 -- # set +x 00:15:30.535 20:07:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:30.535 [2024-04-24 20:07:12.736138] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:30.535 [2024-04-24 20:07:12.736217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.793 [2024-04-24 20:07:12.875232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.793 [2024-04-24 20:07:12.978011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.794 [2024-04-24 20:07:12.978070] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.794 [2024-04-24 20:07:12.978079] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.794 [2024-04-24 20:07:12.978095] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.794 [2024-04-24 20:07:12.978100] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:30.794 [2024-04-24 20:07:12.978128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.729 20:07:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:31.729 20:07:13 -- common/autotest_common.sh@850 -- # return 0 00:15:31.729 20:07:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:31.729 20:07:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:31.729 20:07:13 -- common/autotest_common.sh@10 -- # set +x 00:15:31.729 20:07:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.729 20:07:13 -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:31.729 [2024-04-24 20:07:13.873779] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:31.729 [2024-04-24 20:07:13.874123] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:31.729 [2024-04-24 20:07:13.874290] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:31.729 20:07:13 -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:31.729 20:07:13 -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 27d50918-2487-4c2b-978d-b8eeddbee025 00:15:31.729 20:07:13 -- common/autotest_common.sh@885 -- # local bdev_name=27d50918-2487-4c2b-978d-b8eeddbee025 00:15:31.729 20:07:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:31.729 20:07:13 -- common/autotest_common.sh@887 -- # local i 00:15:31.729 20:07:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:31.729 20:07:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:31.729 20:07:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:31.988 20:07:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27d50918-2487-4c2b-978d-b8eeddbee025 -t 2000 00:15:32.246 [ 00:15:32.246 { 00:15:32.246 "name": "27d50918-2487-4c2b-978d-b8eeddbee025", 00:15:32.246 "aliases": [ 00:15:32.246 "lvs/lvol" 00:15:32.246 ], 00:15:32.246 "product_name": "Logical Volume", 00:15:32.246 "block_size": 4096, 00:15:32.246 "num_blocks": 38912, 00:15:32.246 "uuid": "27d50918-2487-4c2b-978d-b8eeddbee025", 00:15:32.246 "assigned_rate_limits": { 00:15:32.246 "rw_ios_per_sec": 0, 00:15:32.246 "rw_mbytes_per_sec": 0, 00:15:32.246 "r_mbytes_per_sec": 0, 00:15:32.246 "w_mbytes_per_sec": 0 00:15:32.246 }, 00:15:32.246 "claimed": false, 00:15:32.246 "zoned": false, 00:15:32.246 "supported_io_types": { 00:15:32.246 "read": true, 00:15:32.246 "write": true, 00:15:32.246 "unmap": true, 00:15:32.246 "write_zeroes": true, 00:15:32.246 "flush": false, 00:15:32.246 "reset": true, 00:15:32.246 "compare": false, 00:15:32.246 "compare_and_write": false, 00:15:32.246 "abort": false, 00:15:32.246 "nvme_admin": false, 00:15:32.246 "nvme_io": false 00:15:32.246 }, 00:15:32.246 "driver_specific": { 00:15:32.246 "lvol": { 00:15:32.246 "lvol_store_uuid": "97ef0757-8410-43ec-9df4-ddda2cb289d2", 00:15:32.246 "base_bdev": "aio_bdev", 00:15:32.246 "thin_provision": false, 00:15:32.246 "snapshot": false, 00:15:32.246 "clone": false, 00:15:32.246 "esnap_clone": false 00:15:32.246 } 00:15:32.246 } 00:15:32.246 } 00:15:32.246 ] 00:15:32.246 20:07:14 -- common/autotest_common.sh@893 -- # return 0 00:15:32.246 20:07:14 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:32.246 20:07:14 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:32.504 20:07:14 -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:32.504 20:07:14 -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:32.504 20:07:14 -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:32.762 20:07:14 -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:32.762 20:07:14 -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:32.762 [2024-04-24 20:07:15.001147] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:33.021 20:07:15 -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:33.021 20:07:15 -- common/autotest_common.sh@638 -- # local es=0 00:15:33.021 20:07:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:33.021 20:07:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.021 20:07:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:33.021 20:07:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.021 20:07:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:33.021 20:07:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.021 20:07:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:33.021 20:07:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.021 20:07:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:33.021 20:07:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:33.281 request: 00:15:33.281 { 00:15:33.281 "uuid": "97ef0757-8410-43ec-9df4-ddda2cb289d2", 00:15:33.281 "method": "bdev_lvol_get_lvstores", 00:15:33.281 "req_id": 1 00:15:33.281 } 00:15:33.281 Got JSON-RPC error response 00:15:33.281 response: 00:15:33.281 { 00:15:33.281 "code": -19, 00:15:33.281 "message": "No such device" 00:15:33.281 } 00:15:33.281 20:07:15 -- common/autotest_common.sh@641 -- # es=1 00:15:33.281 20:07:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:33.281 20:07:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:33.281 20:07:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:33.281 20:07:15 -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:33.281 aio_bdev 00:15:33.281 20:07:15 -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 27d50918-2487-4c2b-978d-b8eeddbee025 00:15:33.281 20:07:15 -- common/autotest_common.sh@885 -- # local bdev_name=27d50918-2487-4c2b-978d-b8eeddbee025 00:15:33.281 20:07:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:33.281 20:07:15 -- common/autotest_common.sh@887 -- # local i 00:15:33.281 20:07:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:33.281 20:07:15 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:33.281 20:07:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:33.540 20:07:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27d50918-2487-4c2b-978d-b8eeddbee025 -t 2000 00:15:33.800 [ 00:15:33.800 { 00:15:33.800 "name": "27d50918-2487-4c2b-978d-b8eeddbee025", 00:15:33.800 "aliases": [ 00:15:33.800 "lvs/lvol" 00:15:33.800 ], 00:15:33.800 "product_name": "Logical Volume", 00:15:33.800 "block_size": 4096, 00:15:33.800 "num_blocks": 38912, 00:15:33.800 "uuid": "27d50918-2487-4c2b-978d-b8eeddbee025", 00:15:33.800 "assigned_rate_limits": { 00:15:33.800 "rw_ios_per_sec": 0, 00:15:33.800 "rw_mbytes_per_sec": 0, 00:15:33.800 "r_mbytes_per_sec": 0, 00:15:33.800 "w_mbytes_per_sec": 0 00:15:33.800 }, 00:15:33.800 "claimed": false, 00:15:33.800 "zoned": false, 00:15:33.800 "supported_io_types": { 00:15:33.800 "read": true, 00:15:33.800 "write": true, 00:15:33.800 "unmap": true, 00:15:33.800 "write_zeroes": true, 00:15:33.800 "flush": false, 00:15:33.800 "reset": true, 00:15:33.800 "compare": false, 00:15:33.800 "compare_and_write": false, 00:15:33.800 "abort": false, 00:15:33.800 "nvme_admin": false, 00:15:33.800 "nvme_io": false 00:15:33.800 }, 00:15:33.800 "driver_specific": { 00:15:33.800 "lvol": { 00:15:33.800 "lvol_store_uuid": "97ef0757-8410-43ec-9df4-ddda2cb289d2", 00:15:33.800 "base_bdev": "aio_bdev", 00:15:33.800 "thin_provision": false, 00:15:33.800 "snapshot": false, 00:15:33.800 "clone": false, 00:15:33.800 "esnap_clone": false 00:15:33.800 } 00:15:33.800 } 00:15:33.800 } 00:15:33.800 ] 00:15:33.800 20:07:15 -- common/autotest_common.sh@893 -- # return 0 00:15:33.800 20:07:15 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:33.800 20:07:15 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:34.058 20:07:16 -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:34.058 20:07:16 -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:34.058 20:07:16 -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:34.317 20:07:16 -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:34.317 20:07:16 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 27d50918-2487-4c2b-978d-b8eeddbee025 00:15:34.576 20:07:16 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 97ef0757-8410-43ec-9df4-ddda2cb289d2 00:15:34.835 20:07:16 -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:34.835 20:07:17 -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:35.404 ************************************ 00:15:35.404 END TEST lvs_grow_dirty 00:15:35.404 ************************************ 00:15:35.404 00:15:35.404 real 0m19.407s 00:15:35.404 user 0m41.162s 00:15:35.404 sys 0m6.952s 00:15:35.404 20:07:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.404 20:07:17 -- common/autotest_common.sh@10 -- # set +x 00:15:35.404 20:07:17 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:35.404 20:07:17 -- common/autotest_common.sh@794 -- # type=--id 00:15:35.404 20:07:17 -- 
common/autotest_common.sh@795 -- # id=0 00:15:35.404 20:07:17 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:35.404 20:07:17 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:35.404 20:07:17 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:35.404 20:07:17 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:35.404 20:07:17 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:35.404 20:07:17 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:35.404 nvmf_trace.0 00:15:35.404 20:07:17 -- common/autotest_common.sh@809 -- # return 0 00:15:35.404 20:07:17 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:35.404 20:07:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:35.404 20:07:17 -- nvmf/common.sh@117 -- # sync 00:15:35.404 20:07:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:35.404 20:07:17 -- nvmf/common.sh@120 -- # set +e 00:15:35.404 20:07:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:35.404 20:07:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:35.404 rmmod nvme_tcp 00:15:35.404 rmmod nvme_fabrics 00:15:35.404 rmmod nvme_keyring 00:15:35.404 20:07:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:35.404 20:07:17 -- nvmf/common.sh@124 -- # set -e 00:15:35.404 20:07:17 -- nvmf/common.sh@125 -- # return 0 00:15:35.404 20:07:17 -- nvmf/common.sh@478 -- # '[' -n 66430 ']' 00:15:35.404 20:07:17 -- nvmf/common.sh@479 -- # killprocess 66430 00:15:35.404 20:07:17 -- common/autotest_common.sh@936 -- # '[' -z 66430 ']' 00:15:35.404 20:07:17 -- common/autotest_common.sh@940 -- # kill -0 66430 00:15:35.404 20:07:17 -- common/autotest_common.sh@941 -- # uname 00:15:35.404 20:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.404 20:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66430 00:15:35.663 killing process with pid 66430 00:15:35.663 20:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.663 20:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.663 20:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66430' 00:15:35.663 20:07:17 -- common/autotest_common.sh@955 -- # kill 66430 00:15:35.663 20:07:17 -- common/autotest_common.sh@960 -- # wait 66430 00:15:35.922 20:07:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:35.922 20:07:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:35.922 20:07:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:35.922 20:07:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.922 20:07:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.922 20:07:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.922 20:07:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.922 20:07:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.922 20:07:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:35.922 00:15:35.922 real 0m39.152s 00:15:35.922 user 1m3.148s 00:15:35.922 sys 0m10.029s 00:15:35.922 20:07:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.922 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:35.922 ************************************ 00:15:35.922 END TEST nvmf_lvs_grow 00:15:35.922 ************************************ 00:15:35.922 20:07:18 -- nvmf/nvmf.sh@50 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:35.922 20:07:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:35.922 20:07:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.922 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:35.922 ************************************ 00:15:35.922 START TEST nvmf_bdev_io_wait 00:15:35.922 ************************************ 00:15:35.922 20:07:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:36.182 * Looking for test storage... 00:15:36.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:36.182 20:07:18 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.182 20:07:18 -- nvmf/common.sh@7 -- # uname -s 00:15:36.182 20:07:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.182 20:07:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.182 20:07:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.182 20:07:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.182 20:07:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.182 20:07:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.182 20:07:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.182 20:07:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.182 20:07:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.182 20:07:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.183 20:07:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:15:36.183 20:07:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:15:36.183 20:07:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.183 20:07:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.183 20:07:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.183 20:07:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.183 20:07:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.183 20:07:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.183 20:07:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.183 20:07:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.183 20:07:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.183 20:07:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.183 20:07:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.183 20:07:18 -- paths/export.sh@5 -- # export PATH 00:15:36.183 20:07:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.183 20:07:18 -- nvmf/common.sh@47 -- # : 0 00:15:36.183 20:07:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.183 20:07:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.183 20:07:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.183 20:07:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.183 20:07:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.183 20:07:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.183 20:07:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.183 20:07:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.183 20:07:18 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:36.183 20:07:18 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.183 20:07:18 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:36.183 20:07:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:36.183 20:07:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.183 20:07:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:36.183 20:07:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:36.183 20:07:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:36.183 20:07:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.183 20:07:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.183 20:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.183 20:07:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:36.183 20:07:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:36.183 20:07:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:36.183 20:07:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:36.183 20:07:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:15:36.183 20:07:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:36.183 20:07:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.183 20:07:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.183 20:07:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:36.183 20:07:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:36.183 20:07:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.183 20:07:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.183 20:07:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.183 20:07:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.183 20:07:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.183 20:07:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.183 20:07:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.183 20:07:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.183 20:07:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:36.183 20:07:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:36.183 Cannot find device "nvmf_tgt_br" 00:15:36.183 20:07:18 -- nvmf/common.sh@155 -- # true 00:15:36.183 20:07:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.183 Cannot find device "nvmf_tgt_br2" 00:15:36.183 20:07:18 -- nvmf/common.sh@156 -- # true 00:15:36.183 20:07:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:36.183 20:07:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:36.183 Cannot find device "nvmf_tgt_br" 00:15:36.183 20:07:18 -- nvmf/common.sh@158 -- # true 00:15:36.183 20:07:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:36.443 Cannot find device "nvmf_tgt_br2" 00:15:36.443 20:07:18 -- nvmf/common.sh@159 -- # true 00:15:36.443 20:07:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:36.443 20:07:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:36.443 20:07:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.443 20:07:18 -- nvmf/common.sh@162 -- # true 00:15:36.443 20:07:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.443 20:07:18 -- nvmf/common.sh@163 -- # true 00:15:36.443 20:07:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.443 20:07:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.443 20:07:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.443 20:07:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.443 20:07:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.443 20:07:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.443 20:07:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.443 20:07:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:36.443 20:07:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:36.443 
20:07:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:36.443 20:07:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:36.443 20:07:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:36.443 20:07:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:36.443 20:07:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.443 20:07:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.443 20:07:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.443 20:07:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:36.443 20:07:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:36.443 20:07:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.443 20:07:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.443 20:07:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.443 20:07:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.444 20:07:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.444 20:07:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:36.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:36.444 00:15:36.444 --- 10.0.0.2 ping statistics --- 00:15:36.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.444 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:36.444 20:07:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:36.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:15:36.444 00:15:36.444 --- 10.0.0.3 ping statistics --- 00:15:36.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.444 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:36.444 20:07:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:36.444 00:15:36.444 --- 10.0.0.1 ping statistics --- 00:15:36.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.444 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:36.444 20:07:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.444 20:07:18 -- nvmf/common.sh@422 -- # return 0 00:15:36.444 20:07:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:36.444 20:07:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.444 20:07:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:36.444 20:07:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:36.444 20:07:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.444 20:07:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:36.444 20:07:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:36.444 20:07:18 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:36.444 20:07:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:36.444 20:07:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.444 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.444 20:07:18 -- nvmf/common.sh@470 -- # nvmfpid=66741 00:15:36.444 20:07:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:36.444 20:07:18 -- nvmf/common.sh@471 -- # waitforlisten 66741 00:15:36.444 20:07:18 -- common/autotest_common.sh@817 -- # '[' -z 66741 ']' 00:15:36.444 20:07:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.444 20:07:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.444 20:07:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.444 20:07:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.444 20:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.703 [2024-04-24 20:07:18.737293] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:36.703 [2024-04-24 20:07:18.737479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.703 [2024-04-24 20:07:18.878603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.963 [2024-04-24 20:07:18.985448] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.963 [2024-04-24 20:07:18.985588] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.963 [2024-04-24 20:07:18.985628] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.963 [2024-04-24 20:07:18.985658] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.963 [2024-04-24 20:07:18.985676] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
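The nvmf_veth_init steps logged above build a small virtual test network: two veth pairs whose target ends sit in the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), an initiator-side veth at 10.0.0.1, a bridge joining the host-side ends, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of those commands, copied from the log and meant only as an illustration (run as root):

```bash
#!/usr/bin/env bash
# Minimal recreation of the veth/netns topology the tests above rely on.
set -e

ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target; target ends move into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addresses: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Admit NVMe/TCP traffic on the default port and verify connectivity both ways.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```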
00:15:36.963 [2024-04-24 20:07:18.985989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.963 [2024-04-24 20:07:18.986226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.963 [2024-04-24 20:07:18.986131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.963 [2024-04-24 20:07:18.986228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:37.531 20:07:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.531 20:07:19 -- common/autotest_common.sh@850 -- # return 0 00:15:37.531 20:07:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:37.531 20:07:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:37.531 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 20:07:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.531 20:07:19 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:37.531 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.531 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.531 20:07:19 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:37.531 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.531 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.531 20:07:19 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.531 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.531 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.531 [2024-04-24 20:07:19.743368] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.531 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.531 20:07:19 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:37.531 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.531 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.791 Malloc0 00:15:37.791 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:37.791 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.791 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.791 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:37.791 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.791 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.791 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.791 20:07:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.791 20:07:19 -- common/autotest_common.sh@10 -- # set +x 00:15:37.791 [2024-04-24 20:07:19.811615] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:37.791 [2024-04-24 20:07:19.811856] tcp.c: 964:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.791 20:07:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66778 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@30 -- # READ_PID=66779 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66781 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:37.791 20:07:19 -- nvmf/common.sh@521 -- # config=() 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:37.791 20:07:19 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:37.791 20:07:19 -- nvmf/common.sh@521 -- # local subsystem config 00:15:37.791 20:07:19 -- nvmf/common.sh@521 -- # config=() 00:15:37.791 20:07:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:37.791 20:07:19 -- nvmf/common.sh@521 -- # local subsystem config 00:15:37.791 20:07:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:37.792 { 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme$subsystem", 00:15:37.792 "trtype": "$TEST_TRANSPORT", 00:15:37.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "$NVMF_PORT", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.792 "hdgst": ${hdgst:-false}, 00:15:37.792 "ddgst": ${ddgst:-false} 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 } 00:15:37.792 EOF 00:15:37.792 )") 00:15:37.792 20:07:19 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:37.792 { 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme$subsystem", 00:15:37.792 "trtype": "$TEST_TRANSPORT", 00:15:37.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "$NVMF_PORT", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.792 "hdgst": ${hdgst:-false}, 00:15:37.792 "ddgst": ${ddgst:-false} 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 } 00:15:37.792 EOF 00:15:37.792 )") 00:15:37.792 20:07:19 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:37.792 20:07:19 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:37.792 20:07:19 -- nvmf/common.sh@521 -- # config=() 00:15:37.792 20:07:19 -- nvmf/common.sh@521 -- # config=() 00:15:37.792 20:07:19 -- nvmf/common.sh@521 -- # local subsystem config 00:15:37.792 20:07:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:37.792 { 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme$subsystem", 00:15:37.792 "trtype": "$TEST_TRANSPORT", 00:15:37.792 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "$NVMF_PORT", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.792 "hdgst": ${hdgst:-false}, 00:15:37.792 "ddgst": ${ddgst:-false} 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 } 00:15:37.792 EOF 00:15:37.792 )") 00:15:37.792 20:07:19 -- nvmf/common.sh@521 -- # local subsystem config 00:15:37.792 20:07:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:37.792 { 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme$subsystem", 00:15:37.792 "trtype": "$TEST_TRANSPORT", 00:15:37.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "$NVMF_PORT", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:37.792 "hdgst": ${hdgst:-false}, 00:15:37.792 "ddgst": ${ddgst:-false} 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 } 00:15:37.792 EOF 00:15:37.792 )") 00:15:37.792 20:07:19 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66783 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # cat 00:15:37.792 20:07:19 -- target/bdev_io_wait.sh@35 -- # sync 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # cat 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # cat 00:15:37.792 20:07:19 -- nvmf/common.sh@543 -- # cat 00:15:37.792 20:07:19 -- nvmf/common.sh@545 -- # jq . 00:15:37.792 20:07:19 -- nvmf/common.sh@545 -- # jq . 00:15:37.792 20:07:19 -- nvmf/common.sh@545 -- # jq . 00:15:37.792 20:07:19 -- nvmf/common.sh@546 -- # IFS=, 00:15:37.792 20:07:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme1", 00:15:37.792 "trtype": "tcp", 00:15:37.792 "traddr": "10.0.0.2", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "4420", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.792 "hdgst": false, 00:15:37.792 "ddgst": false 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 }' 00:15:37.792 20:07:19 -- nvmf/common.sh@545 -- # jq . 
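The target provisioning a few lines up is the crux of this test: nvmf_tgt is started with --wait-for-rpc so that bdev_set_options -p 5 -c 1 can shrink the bdev_io pool and cache before the framework initializes, which lets bdevperf exhaust the pool quickly and exercise the bdev io-wait path. A standalone sketch of that RPC sequence, with scripts/rpc.py standing in for the rpc_cmd helper and a sleep as a crude substitute for waitforlisten:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"       # standalone equivalent of the rpc_cmd helper in the log

# Start the target inside the namespace but hold framework init (--wait-for-rpc)
# so the bdev_io pool can be shrunk first.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
sleep 2                          # crude stand-in for waitforlisten

$RPC bdev_set_options -p 5 -c 1  # tiny bdev_io pool/cache so submissions hit the io-wait path
$RPC framework_start_init        # now finish subsystem initialization

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```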
00:15:37.792 20:07:19 -- nvmf/common.sh@546 -- # IFS=, 00:15:37.792 20:07:19 -- nvmf/common.sh@546 -- # IFS=, 00:15:37.792 20:07:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme1", 00:15:37.792 "trtype": "tcp", 00:15:37.792 "traddr": "10.0.0.2", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "4420", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.792 "hdgst": false, 00:15:37.792 "ddgst": false 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 }' 00:15:37.792 20:07:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme1", 00:15:37.792 "trtype": "tcp", 00:15:37.792 "traddr": "10.0.0.2", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "4420", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.792 "hdgst": false, 00:15:37.792 "ddgst": false 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 }' 00:15:37.792 20:07:19 -- nvmf/common.sh@546 -- # IFS=, 00:15:37.792 20:07:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:37.792 "params": { 00:15:37.792 "name": "Nvme1", 00:15:37.792 "trtype": "tcp", 00:15:37.792 "traddr": "10.0.0.2", 00:15:37.792 "adrfam": "ipv4", 00:15:37.792 "trsvcid": "4420", 00:15:37.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:37.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:37.792 "hdgst": false, 00:15:37.792 "ddgst": false 00:15:37.792 }, 00:15:37.792 "method": "bdev_nvme_attach_controller" 00:15:37.792 }' 00:15:37.792 [2024-04-24 20:07:19.869826] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:37.792 [2024-04-24 20:07:19.870002] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:37.792 [2024-04-24 20:07:19.875930] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:37.792 [2024-04-24 20:07:19.876066] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:37.792 [2024-04-24 20:07:19.885890] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:37.792 20:07:19 -- target/bdev_io_wait.sh@37 -- # wait 66778 00:15:37.792 [2024-04-24 20:07:19.889436] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:37.792 [2024-04-24 20:07:19.894623] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
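The --json /dev/fd/63 arguments in the bdevperf command lines come from feeding each instance its target description through process substitution. A hedged sketch of that pattern: the bdev_nvme_attach_controller entry mirrors the JSON printed above, while the outer subsystems wrapper is the usual SPDK JSON-config shape and is an assumption here, since the log only prints the inner entry.

```bash
# Simplified stand-in for the gen_nvmf_target_json helper seen above.
gen_target_json() {
  cat <<EOF | jq .
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Process substitution is why bdevperf's command line shows "--json /dev/fd/63".
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(gen_target_json)
```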
00:15:37.792 [2024-04-24 20:07:19.894691] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:38.051 [2024-04-24 20:07:20.060325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.051 [2024-04-24 20:07:20.130191] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.051 [2024-04-24 20:07:20.152873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:38.051 [2024-04-24 20:07:20.187921] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.051 [2024-04-24 20:07:20.223855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:38.051 [2024-04-24 20:07:20.256618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.051 [2024-04-24 20:07:20.281023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:38.051 Running I/O for 1 seconds... 00:15:38.311 [2024-04-24 20:07:20.340680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:38.311 Running I/O for 1 seconds... 00:15:38.311 Running I/O for 1 seconds... 00:15:38.311 Running I/O for 1 seconds... 00:15:39.247 00:15:39.247 Latency(us) 00:15:39.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.247 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:39.247 Nvme1n1 : 1.01 8932.63 34.89 0.00 0.00 14257.13 9329.58 20948.63 00:15:39.247 =================================================================================================================== 00:15:39.247 Total : 8932.63 34.89 0.00 0.00 14257.13 9329.58 20948.63 00:15:39.247 00:15:39.247 Latency(us) 00:15:39.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.247 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:39.247 Nvme1n1 : 1.01 8437.05 32.96 0.00 0.00 15102.09 8299.32 27244.66 00:15:39.247 =================================================================================================================== 00:15:39.247 Total : 8437.05 32.96 0.00 0.00 15102.09 8299.32 27244.66 00:15:39.247 00:15:39.247 Latency(us) 00:15:39.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.247 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:39.247 Nvme1n1 : 1.01 8220.66 32.11 0.00 0.00 15507.99 7097.35 30678.86 00:15:39.247 =================================================================================================================== 00:15:39.247 Total : 8220.66 32.11 0.00 0.00 15507.99 7097.35 30678.86 00:15:39.247 00:15:39.247 Latency(us) 00:15:39.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.247 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:39.247 Nvme1n1 : 1.00 187213.54 731.30 0.00 0.00 681.23 282.61 1209.12 00:15:39.247 =================================================================================================================== 00:15:39.247 Total : 187213.54 731.30 0.00 0.00 681.23 282.61 1209.12 00:15:39.506 20:07:21 -- target/bdev_io_wait.sh@38 -- # wait 66779 00:15:39.506 20:07:21 -- target/bdev_io_wait.sh@39 -- # wait 66781 00:15:39.506 20:07:21 -- target/bdev_io_wait.sh@40 -- # wait 66783 00:15:39.506 20:07:21 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
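Putting it together, bdev_io_wait runs four bdevperf instances at once, one workload each (write, read, flush, unmap) on separate cores, all against the same namespace, and then waits on all of them; that is what produced the four latency tables above. A sketch of that pattern, reusing gen_target_json from the previous example:

```bash
# One bdevperf instance per workload, pinned to distinct cores, all pointed at the
# same NVMe-oF namespace; the script then waits for every instance to finish.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

$BDEVPERF -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(gen_target_json) & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 -q 128 -o 4096 -w read  -t 1 -s 256 --json <(gen_target_json) & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 -q 128 -o 4096 -w flush -t 1 -s 256 --json <(gen_target_json) & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 -q 128 -o 4096 -w unmap -t 1 -s 256 --json <(gen_target_json) & UNMAP_PID=$!

sync
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
```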
00:15:39.506 20:07:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.506 20:07:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.506 20:07:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.506 20:07:21 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:39.506 20:07:21 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:39.506 20:07:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:39.506 20:07:21 -- nvmf/common.sh@117 -- # sync 00:15:39.765 20:07:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.765 20:07:21 -- nvmf/common.sh@120 -- # set +e 00:15:39.765 20:07:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.765 20:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.765 rmmod nvme_tcp 00:15:39.765 rmmod nvme_fabrics 00:15:39.765 rmmod nvme_keyring 00:15:39.765 20:07:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.765 20:07:21 -- nvmf/common.sh@124 -- # set -e 00:15:39.765 20:07:21 -- nvmf/common.sh@125 -- # return 0 00:15:39.765 20:07:21 -- nvmf/common.sh@478 -- # '[' -n 66741 ']' 00:15:39.765 20:07:21 -- nvmf/common.sh@479 -- # killprocess 66741 00:15:39.765 20:07:21 -- common/autotest_common.sh@936 -- # '[' -z 66741 ']' 00:15:39.765 20:07:21 -- common/autotest_common.sh@940 -- # kill -0 66741 00:15:39.765 20:07:21 -- common/autotest_common.sh@941 -- # uname 00:15:39.765 20:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.765 20:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66741 00:15:39.765 20:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.765 20:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.765 20:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66741' 00:15:39.765 killing process with pid 66741 00:15:39.765 20:07:21 -- common/autotest_common.sh@955 -- # kill 66741 00:15:39.765 [2024-04-24 20:07:21.854333] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:39.765 20:07:21 -- common/autotest_common.sh@960 -- # wait 66741 00:15:40.025 20:07:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:40.025 20:07:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:40.025 20:07:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:40.025 20:07:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.025 20:07:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:40.025 20:07:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.025 20:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.025 20:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.025 20:07:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:40.025 00:15:40.025 real 0m3.953s 00:15:40.025 user 0m17.310s 00:15:40.025 sys 0m1.844s 00:15:40.025 20:07:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.025 20:07:22 -- common/autotest_common.sh@10 -- # set +x 00:15:40.025 ************************************ 00:15:40.025 END TEST nvmf_bdev_io_wait 00:15:40.025 ************************************ 00:15:40.025 20:07:22 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:40.025 20:07:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:40.025 
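The nvmftestfini teardown that just ran repeats after every test in this log: unload the NVMe fabrics modules, kill the target, and tear the namespace and addresses back down. A rough sketch of that cleanup, with the netns deletion as an assumption about what _remove_spdk_ns does here:

```bash
# Rough recreation of the nvmftestfini/nvmfcleanup sequence visible above.
sync
modprobe -v -r nvme-tcp          # the log shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # nvmf_tgt PID recorded at start-up (66741 for this test)
ip netns delete nvmf_tgt_ns_spdk # assumption: this is what _remove_spdk_ns amounts to here
ip -4 addr flush nvmf_init_if
```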
20:07:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.025 20:07:22 -- common/autotest_common.sh@10 -- # set +x 00:15:40.025 ************************************ 00:15:40.025 START TEST nvmf_queue_depth 00:15:40.025 ************************************ 00:15:40.025 20:07:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:40.286 * Looking for test storage... 00:15:40.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:40.286 20:07:22 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.286 20:07:22 -- nvmf/common.sh@7 -- # uname -s 00:15:40.286 20:07:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.286 20:07:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.286 20:07:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.286 20:07:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.286 20:07:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.286 20:07:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.286 20:07:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.286 20:07:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.286 20:07:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.286 20:07:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.286 20:07:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:15:40.286 20:07:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:15:40.286 20:07:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.286 20:07:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.286 20:07:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.286 20:07:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.286 20:07:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.286 20:07:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.286 20:07:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.286 20:07:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.286 20:07:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.286 20:07:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.286 20:07:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.286 20:07:22 -- paths/export.sh@5 -- # export PATH 00:15:40.286 20:07:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.286 20:07:22 -- nvmf/common.sh@47 -- # : 0 00:15:40.286 20:07:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.286 20:07:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.286 20:07:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.286 20:07:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.286 20:07:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.286 20:07:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.286 20:07:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.286 20:07:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.286 20:07:22 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:40.286 20:07:22 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:40.286 20:07:22 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:40.286 20:07:22 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:40.286 20:07:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:40.286 20:07:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.286 20:07:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:40.287 20:07:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:40.287 20:07:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:40.287 20:07:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.287 20:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.287 20:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.287 20:07:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:40.287 20:07:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:40.287 20:07:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:40.287 20:07:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:40.287 20:07:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:40.287 20:07:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:40.287 20:07:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.287 20:07:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.287 20:07:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.287 20:07:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:40.287 20:07:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.287 20:07:22 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.287 20:07:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.287 20:07:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.287 20:07:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.287 20:07:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.287 20:07:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.287 20:07:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.287 20:07:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:40.287 20:07:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:40.287 Cannot find device "nvmf_tgt_br" 00:15:40.287 20:07:22 -- nvmf/common.sh@155 -- # true 00:15:40.287 20:07:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.287 Cannot find device "nvmf_tgt_br2" 00:15:40.287 20:07:22 -- nvmf/common.sh@156 -- # true 00:15:40.287 20:07:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:40.287 20:07:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:40.287 Cannot find device "nvmf_tgt_br" 00:15:40.287 20:07:22 -- nvmf/common.sh@158 -- # true 00:15:40.287 20:07:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:40.287 Cannot find device "nvmf_tgt_br2" 00:15:40.287 20:07:22 -- nvmf/common.sh@159 -- # true 00:15:40.287 20:07:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:40.547 20:07:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:40.547 20:07:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.547 20:07:22 -- nvmf/common.sh@162 -- # true 00:15:40.547 20:07:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.547 20:07:22 -- nvmf/common.sh@163 -- # true 00:15:40.547 20:07:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.547 20:07:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.547 20:07:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.547 20:07:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.547 20:07:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.547 20:07:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.547 20:07:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.547 20:07:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.547 20:07:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.547 20:07:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:40.547 20:07:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:40.547 20:07:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:40.547 20:07:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:40.547 20:07:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.547 20:07:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:40.547 20:07:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.547 20:07:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.547 20:07:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.547 20:07:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.547 20:07:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.547 20:07:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.547 20:07:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.547 20:07:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.547 20:07:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:15:40.547 00:15:40.547 --- 10.0.0.2 ping statistics --- 00:15:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.547 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:40.547 20:07:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:15:40.547 00:15:40.547 --- 10.0.0.3 ping statistics --- 00:15:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.547 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:15:40.547 20:07:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:15:40.547 00:15:40.547 --- 10.0.0.1 ping statistics --- 00:15:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.547 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:40.547 20:07:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.547 20:07:22 -- nvmf/common.sh@422 -- # return 0 00:15:40.547 20:07:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:40.547 20:07:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.547 20:07:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:40.547 20:07:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:40.547 20:07:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.547 20:07:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:40.547 20:07:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:40.547 20:07:22 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:40.547 20:07:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:40.547 20:07:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:40.547 20:07:22 -- common/autotest_common.sh@10 -- # set +x 00:15:40.547 20:07:22 -- nvmf/common.sh@470 -- # nvmfpid=67023 00:15:40.547 20:07:22 -- nvmf/common.sh@471 -- # waitforlisten 67023 00:15:40.547 20:07:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:40.547 20:07:22 -- common/autotest_common.sh@817 -- # '[' -z 67023 ']' 00:15:40.547 20:07:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.547 20:07:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:40.547 20:07:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.547 20:07:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:40.547 20:07:22 -- common/autotest_common.sh@10 -- # set +x 00:15:40.806 [2024-04-24 20:07:22.806758] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:40.806 [2024-04-24 20:07:22.806830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.806 [2024-04-24 20:07:22.945969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.806 [2024-04-24 20:07:23.046297] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.806 [2024-04-24 20:07:23.046349] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.806 [2024-04-24 20:07:23.046356] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.806 [2024-04-24 20:07:23.046362] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.806 [2024-04-24 20:07:23.046366] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.806 [2024-04-24 20:07:23.046413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.744 20:07:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:41.744 20:07:23 -- common/autotest_common.sh@850 -- # return 0 00:15:41.744 20:07:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:41.744 20:07:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 20:07:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.744 20:07:23 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.744 20:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 [2024-04-24 20:07:23.747525] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.744 20:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.744 20:07:23 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:41.744 20:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 Malloc0 00:15:41.744 20:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.744 20:07:23 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:41.744 20:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 20:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.744 20:07:23 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:41.744 20:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 20:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.744 20:07:23 -- target/queue_depth.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.744 20:07:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 [2024-04-24 20:07:23.814496] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:41.744 [2024-04-24 20:07:23.814737] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.744 20:07:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:41.744 20:07:23 -- target/queue_depth.sh@30 -- # bdevperf_pid=67055 00:15:41.744 20:07:23 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:41.744 20:07:23 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.744 20:07:23 -- target/queue_depth.sh@33 -- # waitforlisten 67055 /var/tmp/bdevperf.sock 00:15:41.744 20:07:23 -- common/autotest_common.sh@817 -- # '[' -z 67055 ']' 00:15:41.744 20:07:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.744 20:07:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.744 20:07:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.744 20:07:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.744 20:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:41.744 [2024-04-24 20:07:23.871843] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:41.744 [2024-04-24 20:07:23.871931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67055 ] 00:15:42.004 [2024-04-24 20:07:24.003774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.004 [2024-04-24 20:07:24.114527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.939 20:07:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.939 20:07:24 -- common/autotest_common.sh@850 -- # return 0 00:15:42.939 20:07:24 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:42.939 20:07:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.939 20:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:42.939 NVMe0n1 00:15:42.939 20:07:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.939 20:07:24 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.939 Running I/O for 10 seconds... 
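On the initiator side, the queue_depth test drives everything over bdevperf's own RPC socket: bdevperf starts idle with -z, the remote namespace is attached as NVMe0, and bdevperf.py perform_tests then launches the -q 1024 verify workload whose results follow. A sketch of that sequence, with paths as in the log and a sleep standing in for waitforlisten:

```bash
SPDK=/home/vagrant/spdk_repo/spdk

# bdevperf comes up idle (-z) and waits for commands on its private RPC socket.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
sleep 2

# Attach the NVMe-oF namespace exported by the target provisioned earlier.
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Kick off the configured workload; this is the 10-second verify run reported below.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```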
00:15:52.937 00:15:52.937 Latency(us) 00:15:52.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.937 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:52.937 Verification LBA range: start 0x0 length 0x4000 00:15:52.937 NVMe0n1 : 10.09 8842.08 34.54 0.00 0.00 115326.51 22780.20 86999.76 00:15:52.937 =================================================================================================================== 00:15:52.937 Total : 8842.08 34.54 0.00 0.00 115326.51 22780.20 86999.76 00:15:52.937 0 00:15:52.937 20:07:35 -- target/queue_depth.sh@39 -- # killprocess 67055 00:15:52.937 20:07:35 -- common/autotest_common.sh@936 -- # '[' -z 67055 ']' 00:15:52.937 20:07:35 -- common/autotest_common.sh@940 -- # kill -0 67055 00:15:52.937 20:07:35 -- common/autotest_common.sh@941 -- # uname 00:15:52.937 20:07:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.937 20:07:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67055 00:15:52.937 20:07:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.937 20:07:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.937 killing process with pid 67055 00:15:52.937 20:07:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67055' 00:15:52.937 20:07:35 -- common/autotest_common.sh@955 -- # kill 67055 00:15:52.937 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.937 00:15:52.937 Latency(us) 00:15:52.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.937 =================================================================================================================== 00:15:52.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.937 20:07:35 -- common/autotest_common.sh@960 -- # wait 67055 00:15:53.197 20:07:35 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:53.197 20:07:35 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:53.197 20:07:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:53.197 20:07:35 -- nvmf/common.sh@117 -- # sync 00:15:53.197 20:07:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:53.197 20:07:35 -- nvmf/common.sh@120 -- # set +e 00:15:53.197 20:07:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.197 20:07:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:53.197 rmmod nvme_tcp 00:15:53.197 rmmod nvme_fabrics 00:15:53.197 rmmod nvme_keyring 00:15:53.456 20:07:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.456 20:07:35 -- nvmf/common.sh@124 -- # set -e 00:15:53.456 20:07:35 -- nvmf/common.sh@125 -- # return 0 00:15:53.456 20:07:35 -- nvmf/common.sh@478 -- # '[' -n 67023 ']' 00:15:53.456 20:07:35 -- nvmf/common.sh@479 -- # killprocess 67023 00:15:53.456 20:07:35 -- common/autotest_common.sh@936 -- # '[' -z 67023 ']' 00:15:53.456 20:07:35 -- common/autotest_common.sh@940 -- # kill -0 67023 00:15:53.456 20:07:35 -- common/autotest_common.sh@941 -- # uname 00:15:53.456 20:07:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.456 20:07:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67023 00:15:53.456 20:07:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:53.456 killing process with pid 67023 00:15:53.456 20:07:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:53.456 20:07:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67023' 00:15:53.456 20:07:35 -- 
common/autotest_common.sh@955 -- # kill 67023 00:15:53.456 [2024-04-24 20:07:35.485175] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:53.456 20:07:35 -- common/autotest_common.sh@960 -- # wait 67023 00:15:53.715 20:07:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:53.715 20:07:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:53.715 20:07:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:53.715 20:07:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.715 20:07:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:53.715 20:07:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.715 20:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.715 20:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.715 20:07:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:53.715 00:15:53.715 real 0m13.508s 00:15:53.715 user 0m23.660s 00:15:53.715 sys 0m1.960s 00:15:53.715 20:07:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:53.715 20:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:53.715 ************************************ 00:15:53.715 END TEST nvmf_queue_depth 00:15:53.715 ************************************ 00:15:53.715 20:07:35 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.715 20:07:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.715 20:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.715 20:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:53.715 ************************************ 00:15:53.715 START TEST nvmf_multipath 00:15:53.715 ************************************ 00:15:53.715 20:07:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.976 * Looking for test storage... 
00:15:53.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.976 20:07:36 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.976 20:07:36 -- nvmf/common.sh@7 -- # uname -s 00:15:53.976 20:07:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.976 20:07:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.976 20:07:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.976 20:07:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.976 20:07:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.976 20:07:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.976 20:07:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.976 20:07:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.976 20:07:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.976 20:07:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.976 20:07:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:15:53.976 20:07:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:15:53.976 20:07:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.976 20:07:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.976 20:07:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.976 20:07:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.976 20:07:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.976 20:07:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.976 20:07:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.976 20:07:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.976 20:07:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.976 20:07:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.976 20:07:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.976 20:07:36 -- paths/export.sh@5 -- # export PATH 00:15:53.976 20:07:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.976 20:07:36 -- nvmf/common.sh@47 -- # : 0 00:15:53.976 20:07:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.976 20:07:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.976 20:07:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.976 20:07:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.976 20:07:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.976 20:07:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.976 20:07:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.976 20:07:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.976 20:07:36 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.976 20:07:36 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.976 20:07:36 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:53.976 20:07:36 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.976 20:07:36 -- target/multipath.sh@43 -- # nvmftestinit 00:15:53.976 20:07:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:53.976 20:07:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.976 20:07:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:53.976 20:07:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:53.976 20:07:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:53.976 20:07:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.976 20:07:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.976 20:07:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.976 20:07:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:53.976 20:07:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:53.976 20:07:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:53.976 20:07:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:53.976 20:07:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:53.976 20:07:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:53.976 20:07:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.976 20:07:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.976 20:07:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.976 20:07:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:53.976 20:07:36 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.976 20:07:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.976 20:07:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.976 20:07:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.976 20:07:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.976 20:07:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.976 20:07:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.976 20:07:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.976 20:07:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:53.976 20:07:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:53.976 Cannot find device "nvmf_tgt_br" 00:15:53.976 20:07:36 -- nvmf/common.sh@155 -- # true 00:15:53.976 20:07:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.976 Cannot find device "nvmf_tgt_br2" 00:15:53.976 20:07:36 -- nvmf/common.sh@156 -- # true 00:15:53.976 20:07:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.976 20:07:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.976 Cannot find device "nvmf_tgt_br" 00:15:53.976 20:07:36 -- nvmf/common.sh@158 -- # true 00:15:53.976 20:07:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.976 Cannot find device "nvmf_tgt_br2" 00:15:53.976 20:07:36 -- nvmf/common.sh@159 -- # true 00:15:53.976 20:07:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.976 20:07:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.976 20:07:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.976 20:07:36 -- nvmf/common.sh@162 -- # true 00:15:53.976 20:07:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.976 20:07:36 -- nvmf/common.sh@163 -- # true 00:15:53.976 20:07:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.976 20:07:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.976 20:07:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.236 20:07:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.236 20:07:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.236 20:07:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.236 20:07:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.236 20:07:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.236 20:07:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.236 20:07:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:54.236 20:07:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:54.236 20:07:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:54.236 20:07:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:54.236 20:07:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:54.236 20:07:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.236 20:07:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.236 20:07:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:54.236 20:07:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:54.236 20:07:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.236 20:07:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.236 20:07:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.236 20:07:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.236 20:07:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.236 20:07:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:54.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:15:54.236 00:15:54.236 --- 10.0.0.2 ping statistics --- 00:15:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.236 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:15:54.236 20:07:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:54.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:15:54.236 00:15:54.236 --- 10.0.0.3 ping statistics --- 00:15:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.236 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:54.236 20:07:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:54.236 00:15:54.236 --- 10.0.0.1 ping statistics --- 00:15:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.236 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:54.236 20:07:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.236 20:07:36 -- nvmf/common.sh@422 -- # return 0 00:15:54.236 20:07:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:54.236 20:07:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.236 20:07:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:54.237 20:07:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:54.237 20:07:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.237 20:07:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:54.237 20:07:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:54.237 20:07:36 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:54.237 20:07:36 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:54.237 20:07:36 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:54.237 20:07:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:54.237 20:07:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:54.237 20:07:36 -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 20:07:36 -- nvmf/common.sh@470 -- # nvmfpid=67370 00:15:54.237 20:07:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.237 20:07:36 -- nvmf/common.sh@471 -- # waitforlisten 67370 00:15:54.237 20:07:36 -- common/autotest_common.sh@817 -- # '[' -z 67370 ']' 00:15:54.237 20:07:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.237 20:07:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:54.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.237 20:07:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.237 20:07:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:54.237 20:07:36 -- common/autotest_common.sh@10 -- # set +x 00:15:54.237 [2024-04-24 20:07:36.476609] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:15:54.237 [2024-04-24 20:07:36.476715] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.496 [2024-04-24 20:07:36.609227] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.754 [2024-04-24 20:07:36.752690] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.754 [2024-04-24 20:07:36.752792] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.754 [2024-04-24 20:07:36.752807] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.754 [2024-04-24 20:07:36.752816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.754 [2024-04-24 20:07:36.752823] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
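For reference, the virtual topology that nvmf_veth_init assembles in the trace above (one initiator interface on the host, two target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge) can be reproduced by hand with roughly the commands below. This is a condensed sketch of what test/nvmf/common.sh does; the interface names, addresses and iptables rules are taken from the log, everything else is an assumption.

# target-side interfaces live in a dedicated network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair (host side)
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listen addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# admit NVMe/TCP traffic on port 4420 and allow bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity checks, as in the log, then load the kernel NVMe/TCP initiator
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp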
00:15:54.754 [2024-04-24 20:07:36.752984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.754 [2024-04-24 20:07:36.753251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.754 [2024-04-24 20:07:36.753462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.754 [2024-04-24 20:07:36.753466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.367 20:07:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.367 20:07:37 -- common/autotest_common.sh@850 -- # return 0 00:15:55.367 20:07:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:55.367 20:07:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:55.367 20:07:37 -- common/autotest_common.sh@10 -- # set +x 00:15:55.367 20:07:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.367 20:07:37 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:55.624 [2024-04-24 20:07:37.655026] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.624 20:07:37 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:55.880 Malloc0 00:15:55.880 20:07:37 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:56.137 20:07:38 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.395 20:07:38 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.653 [2024-04-24 20:07:38.680636] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:56.653 [2024-04-24 20:07:38.680922] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.653 20:07:38 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.653 [2024-04-24 20:07:38.896668] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.911 20:07:38 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:56.911 20:07:39 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:56.911 20:07:39 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.911 20:07:39 -- common/autotest_common.sh@1184 -- # local i=0 00:15:56.911 20:07:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.911 20:07:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:56.911 20:07:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:59.441 20:07:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:59.441 20:07:41 -- common/autotest_common.sh@1193 -- # grep 
-c SPDKISFASTANDAWESOME 00:15:59.441 20:07:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:59.441 20:07:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:59.441 20:07:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.441 20:07:41 -- common/autotest_common.sh@1194 -- # return 0 00:15:59.441 20:07:41 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:59.441 20:07:41 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:59.441 20:07:41 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:59.441 20:07:41 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:59.441 20:07:41 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:59.441 20:07:41 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:59.441 20:07:41 -- target/multipath.sh@38 -- # return 0 00:15:59.441 20:07:41 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:59.441 20:07:41 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:59.441 20:07:41 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:59.441 20:07:41 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:59.441 20:07:41 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:59.441 20:07:41 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:59.441 20:07:41 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:59.441 20:07:41 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:59.441 20:07:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.441 20:07:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:59.441 20:07:41 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:59.441 20:07:41 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:59.441 20:07:41 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:59.441 20:07:41 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:59.441 20:07:41 -- target/multipath.sh@22 -- # local timeout=20 00:15:59.441 20:07:41 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:59.441 20:07:41 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:59.441 20:07:41 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:59.441 20:07:41 -- target/multipath.sh@85 -- # echo numa 00:15:59.441 20:07:41 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:59.442 20:07:41 -- target/multipath.sh@88 -- # fio_pid=67471 00:15:59.442 20:07:41 -- target/multipath.sh@90 -- # sleep 1 00:15:59.442 [global] 00:15:59.442 thread=1 00:15:59.442 invalidate=1 00:15:59.442 rw=randrw 00:15:59.442 time_based=1 00:15:59.442 runtime=6 00:15:59.442 ioengine=libaio 00:15:59.442 direct=1 00:15:59.442 bs=4096 00:15:59.442 iodepth=128 00:15:59.442 norandommap=0 00:15:59.442 numjobs=1 00:15:59.442 00:15:59.442 verify_dump=1 00:15:59.442 verify_backlog=512 00:15:59.442 verify_state_save=0 00:15:59.442 do_verify=1 00:15:59.442 verify=crc32c-intel 00:15:59.442 [job0] 00:15:59.442 filename=/dev/nvme0n1 00:15:59.442 Could not set queue depth (nvme0n1) 00:15:59.442 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:59.442 fio-3.35 00:15:59.442 Starting 1 thread 00:16:00.040 20:07:42 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:00.329 20:07:42 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:00.589 20:07:42 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:16:00.589 20:07:42 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:00.589 20:07:42 -- target/multipath.sh@22 -- # local timeout=20 00:16:00.589 20:07:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:00.589 20:07:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:00.589 20:07:42 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:00.589 20:07:42 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:16:00.589 20:07:42 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:00.589 20:07:42 -- target/multipath.sh@22 -- # local timeout=20 00:16:00.589 20:07:42 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:00.589 20:07:42 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:00.589 20:07:42 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:00.589 20:07:42 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:00.847 20:07:43 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:01.105 20:07:43 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:01.105 20:07:43 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:01.105 20:07:43 -- target/multipath.sh@22 -- # local timeout=20 00:16:01.105 20:07:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:01.105 20:07:43 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:01.105 20:07:43 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:01.105 20:07:43 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:01.105 20:07:43 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:01.105 20:07:43 -- target/multipath.sh@22 -- # local timeout=20 00:16:01.105 20:07:43 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:01.105 20:07:43 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:01.105 20:07:43 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:01.105 20:07:43 -- target/multipath.sh@104 -- # wait 67471 00:16:05.359 00:16:05.359 job0: (groupid=0, jobs=1): err= 0: pid=67492: Wed Apr 24 20:07:47 2024 00:16:05.359 read: IOPS=10.5k, BW=41.2MiB/s (43.2MB/s)(247MiB/6006msec) 00:16:05.359 slat (nsec): min=1519, max=5903.7k, avg=54702.87, stdev=216179.82 00:16:05.359 clat (usec): min=1362, max=24456, avg=8181.24, stdev=1427.41 00:16:05.359 lat (usec): min=1387, max=24465, avg=8235.94, stdev=1433.07 00:16:05.359 clat percentiles (usec): 00:16:05.359 | 1.00th=[ 4555], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7439], 00:16:05.359 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:16:05.359 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[11469], 00:16:05.359 | 99.00th=[12780], 99.50th=[13173], 99.90th=[17433], 99.95th=[20841], 00:16:05.359 | 99.99th=[22676] 00:16:05.359 bw ( KiB/s): min= 1576, max=28648, per=53.42%, avg=22510.67, stdev=8800.86, samples=12 00:16:05.359 iops : min= 394, max= 7162, avg=5627.67, stdev=2200.21, samples=12 00:16:05.359 write: IOPS=6581, BW=25.7MiB/s (27.0MB/s)(132MiB/5146msec); 0 zone resets 00:16:05.359 slat (usec): min=3, max=6305, avg=65.80, stdev=158.55 00:16:05.359 clat (usec): min=1138, max=22643, avg=7200.28, stdev=1360.16 00:16:05.359 lat (usec): min=1198, max=22666, avg=7266.07, stdev=1362.94 00:16:05.359 clat percentiles (usec): 00:16:05.359 | 1.00th=[ 3326], 5.00th=[ 4621], 10.00th=[ 5735], 20.00th=[ 6652], 00:16:05.359 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7504], 00:16:05.359 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 8586], 00:16:05.359 | 99.00th=[11731], 99.50th=[12649], 99.90th=[18744], 99.95th=[19792], 00:16:05.359 | 99.99th=[22414] 00:16:05.359 bw ( KiB/s): min= 1664, max=28720, per=85.62%, avg=22538.00, stdev=8608.25, samples=12 00:16:05.359 iops : min= 416, max= 7180, avg=5634.50, stdev=2152.06, samples=12 00:16:05.359 lat (msec) : 2=0.04%, 4=1.01%, 10=93.55%, 20=5.33%, 50=0.06% 00:16:05.359 cpu : usr=4.60%, sys=22.96%, ctx=5755, majf=0, minf=151 00:16:05.359 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:05.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:05.359 issued rwts: total=63273,33866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.359 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:05.359 00:16:05.359 Run status group 0 (all jobs): 00:16:05.359 READ: bw=41.2MiB/s (43.2MB/s), 41.2MiB/s-41.2MiB/s (43.2MB/s-43.2MB/s), io=247MiB (259MB), run=6006-6006msec 00:16:05.359 WRITE: bw=25.7MiB/s (27.0MB/s), 25.7MiB/s-25.7MiB/s (27.0MB/s-27.0MB/s), io=132MiB (139MB), run=5146-5146msec 00:16:05.359 00:16:05.359 Disk stats (read/write): 00:16:05.359 nvme0n1: 
ios=62613/32967, merge=0/0, ticks=488800/220077, in_queue=708877, util=98.63% 00:16:05.359 20:07:47 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:05.618 20:07:47 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:06.185 20:07:48 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:16:06.185 20:07:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:06.185 20:07:48 -- target/multipath.sh@22 -- # local timeout=20 00:16:06.185 20:07:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:06.185 20:07:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:06.185 20:07:48 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:06.185 20:07:48 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:06.185 20:07:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:06.185 20:07:48 -- target/multipath.sh@22 -- # local timeout=20 00:16:06.185 20:07:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:06.185 20:07:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:06.185 20:07:48 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:06.185 20:07:48 -- target/multipath.sh@113 -- # echo round-robin 00:16:06.185 20:07:48 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:06.185 20:07:48 -- target/multipath.sh@116 -- # fio_pid=67574 00:16:06.185 20:07:48 -- target/multipath.sh@118 -- # sleep 1 00:16:06.185 [global] 00:16:06.185 thread=1 00:16:06.185 invalidate=1 00:16:06.185 rw=randrw 00:16:06.185 time_based=1 00:16:06.185 runtime=6 00:16:06.185 ioengine=libaio 00:16:06.185 direct=1 00:16:06.185 bs=4096 00:16:06.185 iodepth=128 00:16:06.185 norandommap=0 00:16:06.185 numjobs=1 00:16:06.185 00:16:06.185 verify_dump=1 00:16:06.185 verify_backlog=512 00:16:06.185 verify_state_save=0 00:16:06.185 do_verify=1 00:16:06.185 verify=crc32c-intel 00:16:06.185 [job0] 00:16:06.185 filename=/dev/nvme0n1 00:16:06.185 Could not set queue depth (nvme0n1) 00:16:06.185 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:06.185 fio-3.35 00:16:06.185 Starting 1 thread 00:16:07.119 20:07:49 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:07.377 20:07:49 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:07.377 20:07:49 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:07.377 20:07:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:07.377 20:07:49 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.377 20:07:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:07.377 20:07:49 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:07.377 20:07:49 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:07.377 20:07:49 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:07.377 20:07:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:07.377 20:07:49 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.377 20:07:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:07.377 20:07:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:07.377 20:07:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:07.377 20:07:49 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:07.635 20:07:49 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:07.894 20:07:50 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:07.894 20:07:50 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:07.894 20:07:50 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.894 20:07:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:07.894 20:07:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:07.894 20:07:50 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:07.894 20:07:50 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:07.894 20:07:50 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:07.894 20:07:50 -- target/multipath.sh@22 -- # local timeout=20 00:16:07.894 20:07:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:07.894 20:07:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:07.894 20:07:50 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:07.894 20:07:50 -- target/multipath.sh@132 -- # wait 67574 00:16:13.177 00:16:13.177 job0: (groupid=0, jobs=1): err= 0: pid=67595: Wed Apr 24 20:07:54 2024 00:16:13.177 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(284MiB/6002msec) 00:16:13.177 slat (usec): min=2, max=5344, avg=40.79, stdev=172.17 00:16:13.177 clat (usec): min=266, max=15287, avg=7246.84, stdev=1722.99 00:16:13.177 lat (usec): min=278, max=15297, avg=7287.64, stdev=1731.90 00:16:13.177 clat percentiles (usec): 00:16:13.177 | 1.00th=[ 2073], 5.00th=[ 4178], 10.00th=[ 5211], 20.00th=[ 6259], 00:16:13.177 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7570], 00:16:13.177 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[10290], 00:16:13.177 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13698], 99.95th=[13960], 00:16:13.177 | 99.99th=[14484] 00:16:13.177 bw ( KiB/s): min= 8384, max=35560, per=53.66%, avg=26033.91, stdev=8347.06, samples=11 00:16:13.177 iops : min= 2096, max= 8890, avg=6508.45, stdev=2086.76, samples=11 00:16:13.177 write: IOPS=7230, BW=28.2MiB/s (29.6MB/s)(149MiB/5274msec); 0 zone resets 00:16:13.177 slat (usec): min=4, max=6516, avg=54.97, stdev=120.86 00:16:13.177 clat (usec): min=383, max=13201, avg=6122.00, stdev=1491.40 00:16:13.177 lat (usec): min=419, max=13226, avg=6176.98, stdev=1501.71 00:16:13.177 clat percentiles (usec): 00:16:13.177 | 1.00th=[ 1827], 5.00th=[ 3392], 10.00th=[ 4015], 20.00th=[ 4817], 00:16:13.177 | 30.00th=[ 5604], 40.00th=[ 6194], 50.00th=[ 6456], 60.00th=[ 6718], 00:16:13.177 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 7963], 00:16:13.177 | 99.00th=[ 9503], 99.50th=[10552], 99.90th=[11600], 99.95th=[12387], 00:16:13.177 | 99.99th=[13042] 00:16:13.177 bw ( KiB/s): min= 8680, max=36336, per=89.99%, avg=26028.18, stdev=8209.63, samples=11 00:16:13.177 iops : min= 2170, max= 9084, avg=6507.00, stdev=2052.40, samples=11 00:16:13.177 lat (usec) : 500=0.03%, 750=0.05%, 1000=0.16% 00:16:13.177 lat (msec) : 2=0.78%, 4=5.21%, 10=89.77%, 20=3.99% 00:16:13.177 cpu : usr=5.83%, sys=26.81%, ctx=6958, majf=0, minf=108 00:16:13.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:13.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.177 issued rwts: total=72792,38135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.177 00:16:13.177 Run status group 0 (all jobs): 00:16:13.177 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=284MiB (298MB), run=6002-6002msec 00:16:13.177 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=149MiB (156MB), run=5274-5274msec 00:16:13.177 00:16:13.177 Disk stats (read/write): 00:16:13.178 nvme0n1: ios=71264/38135, merge=0/0, ticks=482472/212370, in_queue=694842, util=98.65% 00:16:13.178 20:07:54 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:13.178 20:07:54 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.178 20:07:54 -- common/autotest_common.sh@1205 -- # local i=0 00:16:13.178 20:07:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:13.178 20:07:54 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.178 20:07:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.178 20:07:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:13.178 20:07:54 -- common/autotest_common.sh@1217 -- # return 0 00:16:13.178 20:07:54 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.178 20:07:54 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:13.178 20:07:54 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:13.178 20:07:54 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:13.178 20:07:54 -- target/multipath.sh@144 -- # nvmftestfini 00:16:13.178 20:07:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:13.178 20:07:54 -- nvmf/common.sh@117 -- # sync 00:16:13.178 20:07:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:13.178 20:07:54 -- nvmf/common.sh@120 -- # set +e 00:16:13.178 20:07:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:13.178 20:07:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:13.178 rmmod nvme_tcp 00:16:13.178 rmmod nvme_fabrics 00:16:13.178 rmmod nvme_keyring 00:16:13.178 20:07:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:13.178 20:07:54 -- nvmf/common.sh@124 -- # set -e 00:16:13.178 20:07:54 -- nvmf/common.sh@125 -- # return 0 00:16:13.178 20:07:54 -- nvmf/common.sh@478 -- # '[' -n 67370 ']' 00:16:13.178 20:07:54 -- nvmf/common.sh@479 -- # killprocess 67370 00:16:13.178 20:07:54 -- common/autotest_common.sh@936 -- # '[' -z 67370 ']' 00:16:13.178 20:07:54 -- common/autotest_common.sh@940 -- # kill -0 67370 00:16:13.178 20:07:54 -- common/autotest_common.sh@941 -- # uname 00:16:13.178 20:07:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.178 20:07:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67370 00:16:13.178 killing process with pid 67370 00:16:13.178 20:07:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.178 20:07:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.178 20:07:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67370' 00:16:13.178 20:07:54 -- common/autotest_common.sh@955 -- # kill 67370 00:16:13.178 [2024-04-24 20:07:54.868347] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:13.178 20:07:54 -- common/autotest_common.sh@960 -- # wait 67370 00:16:13.178 20:07:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:13.178 20:07:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:13.178 20:07:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:13.178 20:07:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.178 20:07:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:13.178 20:07:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.178 20:07:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.178 20:07:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.178 20:07:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:13.178 00:16:13.178 real 0m19.278s 00:16:13.178 user 1m13.471s 00:16:13.178 sys 0m8.317s 00:16:13.178 20:07:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:13.178 20:07:55 -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.178 ************************************ 00:16:13.178 END TEST nvmf_multipath 00:16:13.178 ************************************ 00:16:13.178 20:07:55 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.178 20:07:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.178 20:07:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.178 20:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.178 ************************************ 00:16:13.178 START TEST nvmf_zcopy 00:16:13.178 ************************************ 00:16:13.178 20:07:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.178 * Looking for test storage... 00:16:13.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:13.438 20:07:55 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:13.438 20:07:55 -- nvmf/common.sh@7 -- # uname -s 00:16:13.438 20:07:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.438 20:07:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.438 20:07:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.438 20:07:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.438 20:07:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.438 20:07:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.438 20:07:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.438 20:07:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.438 20:07:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.438 20:07:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.438 20:07:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:16:13.438 20:07:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:16:13.438 20:07:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.438 20:07:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.438 20:07:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:13.438 20:07:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.438 20:07:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:13.438 20:07:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.438 20:07:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.438 20:07:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.438 20:07:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.438 20:07:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.438 20:07:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.438 20:07:55 -- paths/export.sh@5 -- # export PATH 00:16:13.438 20:07:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.438 20:07:55 -- nvmf/common.sh@47 -- # : 0 00:16:13.438 20:07:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:13.438 20:07:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:13.438 20:07:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.438 20:07:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.438 20:07:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.438 20:07:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:13.438 20:07:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:13.438 20:07:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:13.438 20:07:55 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:13.438 20:07:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:13.438 20:07:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.438 20:07:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:13.438 20:07:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:13.438 20:07:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:13.438 20:07:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.438 20:07:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.438 20:07:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.438 20:07:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:13.438 20:07:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:13.438 20:07:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:13.438 20:07:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:13.438 20:07:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:13.438 20:07:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:13.438 20:07:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.438 20:07:55 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.438 20:07:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:13.438 20:07:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:13.438 20:07:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:13.438 20:07:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:13.438 20:07:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:13.438 20:07:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.438 20:07:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:13.438 20:07:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:13.438 20:07:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:13.438 20:07:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:13.438 20:07:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:13.438 20:07:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:13.438 Cannot find device "nvmf_tgt_br" 00:16:13.438 20:07:55 -- nvmf/common.sh@155 -- # true 00:16:13.438 20:07:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:13.438 Cannot find device "nvmf_tgt_br2" 00:16:13.438 20:07:55 -- nvmf/common.sh@156 -- # true 00:16:13.438 20:07:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:13.438 20:07:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:13.439 Cannot find device "nvmf_tgt_br" 00:16:13.439 20:07:55 -- nvmf/common.sh@158 -- # true 00:16:13.439 20:07:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:13.439 Cannot find device "nvmf_tgt_br2" 00:16:13.439 20:07:55 -- nvmf/common.sh@159 -- # true 00:16:13.439 20:07:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.439 20:07:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.439 20:07:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.439 20:07:55 -- nvmf/common.sh@162 -- # true 00:16:13.439 20:07:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.439 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.439 20:07:55 -- nvmf/common.sh@163 -- # true 00:16:13.439 20:07:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.439 20:07:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.439 20:07:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.439 20:07:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.698 20:07:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.698 20:07:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.698 20:07:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.698 20:07:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.698 20:07:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.698 20:07:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.698 20:07:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.698 20:07:55 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.698 20:07:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.698 20:07:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.698 20:07:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.698 20:07:55 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.698 20:07:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.698 20:07:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.698 20:07:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.698 20:07:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.698 20:07:55 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.698 20:07:55 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.698 20:07:55 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.698 20:07:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:16:13.698 00:16:13.698 --- 10.0.0.2 ping statistics --- 00:16:13.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.698 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:13.698 20:07:55 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.698 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.698 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:16:13.698 00:16:13.698 --- 10.0.0.3 ping statistics --- 00:16:13.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.698 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:13.698 20:07:55 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:13.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:16:13.698 00:16:13.698 --- 10.0.0.1 ping statistics --- 00:16:13.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.699 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:13.699 20:07:55 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.699 20:07:55 -- nvmf/common.sh@422 -- # return 0 00:16:13.699 20:07:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:13.699 20:07:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.699 20:07:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:13.699 20:07:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:13.699 20:07:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.699 20:07:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:13.699 20:07:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:13.958 20:07:55 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:13.958 20:07:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:13.958 20:07:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:13.958 20:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 20:07:55 -- nvmf/common.sh@470 -- # nvmfpid=67847 00:16:13.958 20:07:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:13.958 20:07:55 -- nvmf/common.sh@471 -- # waitforlisten 67847 00:16:13.958 20:07:55 -- common/autotest_common.sh@817 -- # '[' -z 67847 ']' 00:16:13.958 20:07:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.958 20:07:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:13.958 20:07:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.958 20:07:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:13.958 20:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.958 [2024-04-24 20:07:56.015835] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:13.958 [2024-04-24 20:07:56.015922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.958 [2024-04-24 20:07:56.138310] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.217 [2024-04-24 20:07:56.235428] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.217 [2024-04-24 20:07:56.235473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.217 [2024-04-24 20:07:56.235495] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.217 [2024-04-24 20:07:56.235500] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.217 [2024-04-24 20:07:56.235505] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:14.217 [2024-04-24 20:07:56.235528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.787 20:07:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:14.787 20:07:56 -- common/autotest_common.sh@850 -- # return 0 00:16:14.787 20:07:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:14.787 20:07:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 20:07:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.787 20:07:56 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:14.787 20:07:56 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:14.787 20:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 [2024-04-24 20:07:56.939044] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.787 20:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.787 20:07:56 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:14.787 20:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 20:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.787 20:07:56 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.787 20:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 [2024-04-24 20:07:56.962928] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:14.787 [2024-04-24 20:07:56.963170] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.787 20:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.787 20:07:56 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.787 20:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 20:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.787 20:07:56 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:14.787 20:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 malloc0 00:16:14.787 20:07:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.787 20:07:56 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:14.787 20:07:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.787 20:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 20:07:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.787 20:07:57 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:14.787 20:07:57 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:14.787 20:07:57 -- nvmf/common.sh@521 -- # config=() 00:16:14.787 20:07:57 -- nvmf/common.sh@521 -- # local subsystem config 00:16:14.787 20:07:57 -- nvmf/common.sh@523 -- # 
for subsystem in "${@:-1}" 00:16:14.787 20:07:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:14.787 { 00:16:14.787 "params": { 00:16:14.787 "name": "Nvme$subsystem", 00:16:14.787 "trtype": "$TEST_TRANSPORT", 00:16:14.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.787 "adrfam": "ipv4", 00:16:14.787 "trsvcid": "$NVMF_PORT", 00:16:14.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.787 "hdgst": ${hdgst:-false}, 00:16:14.787 "ddgst": ${ddgst:-false} 00:16:14.787 }, 00:16:14.787 "method": "bdev_nvme_attach_controller" 00:16:14.787 } 00:16:14.787 EOF 00:16:14.787 )") 00:16:14.787 20:07:57 -- nvmf/common.sh@543 -- # cat 00:16:14.787 20:07:57 -- nvmf/common.sh@545 -- # jq . 00:16:14.787 20:07:57 -- nvmf/common.sh@546 -- # IFS=, 00:16:14.787 20:07:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:14.787 "params": { 00:16:14.787 "name": "Nvme1", 00:16:14.787 "trtype": "tcp", 00:16:14.787 "traddr": "10.0.0.2", 00:16:14.787 "adrfam": "ipv4", 00:16:14.787 "trsvcid": "4420", 00:16:14.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.787 "hdgst": false, 00:16:14.787 "ddgst": false 00:16:14.787 }, 00:16:14.787 "method": "bdev_nvme_attach_controller" 00:16:14.787 }' 00:16:15.047 [2024-04-24 20:07:57.060272] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:15.047 [2024-04-24 20:07:57.060339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67880 ] 00:16:15.047 [2024-04-24 20:07:57.198154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.047 [2024-04-24 20:07:57.299248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.307 Running I/O for 10 seconds... 
00:16:25.300 00:16:25.300 Latency(us) 00:16:25.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.300 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:25.300 Verification LBA range: start 0x0 length 0x1000 00:16:25.300 Nvme1n1 : 10.02 6932.61 54.16 0.00 0.00 18409.67 3248.18 30449.91 00:16:25.300 =================================================================================================================== 00:16:25.300 Total : 6932.61 54.16 0.00 0.00 18409.67 3248.18 30449.91 00:16:25.560 20:08:07 -- target/zcopy.sh@39 -- # perfpid=68002 00:16:25.560 20:08:07 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:25.560 20:08:07 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:25.560 20:08:07 -- common/autotest_common.sh@10 -- # set +x 00:16:25.560 20:08:07 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:25.560 20:08:07 -- nvmf/common.sh@521 -- # config=() 00:16:25.560 20:08:07 -- nvmf/common.sh@521 -- # local subsystem config 00:16:25.560 20:08:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:25.560 20:08:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:25.560 { 00:16:25.560 "params": { 00:16:25.560 "name": "Nvme$subsystem", 00:16:25.560 "trtype": "$TEST_TRANSPORT", 00:16:25.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.560 "adrfam": "ipv4", 00:16:25.560 "trsvcid": "$NVMF_PORT", 00:16:25.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.560 "hdgst": ${hdgst:-false}, 00:16:25.560 "ddgst": ${ddgst:-false} 00:16:25.560 }, 00:16:25.560 "method": "bdev_nvme_attach_controller" 00:16:25.560 } 00:16:25.560 EOF 00:16:25.560 )") 00:16:25.560 [2024-04-24 20:08:07.684556] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.684598] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 20:08:07 -- nvmf/common.sh@543 -- # cat 00:16:25.560 20:08:07 -- nvmf/common.sh@545 -- # jq . 00:16:25.560 [2024-04-24 20:08:07.692519] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.692549] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 20:08:07 -- nvmf/common.sh@546 -- # IFS=, 00:16:25.560 20:08:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:25.560 "params": { 00:16:25.560 "name": "Nvme1", 00:16:25.560 "trtype": "tcp", 00:16:25.560 "traddr": "10.0.0.2", 00:16:25.560 "adrfam": "ipv4", 00:16:25.560 "trsvcid": "4420", 00:16:25.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.560 "hdgst": false, 00:16:25.560 "ddgst": false 00:16:25.560 }, 00:16:25.560 "method": "bdev_nvme_attach_controller" 00:16:25.560 }' 00:16:25.560 [2024-04-24 20:08:07.700510] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.700547] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.708488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.708543] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.712860] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:16:25.560 [2024-04-24 20:08:07.712930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68002 ] 00:16:25.560 [2024-04-24 20:08:07.716470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.716497] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.724459] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.724490] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.732451] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.732474] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.744453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.744485] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.756423] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.756454] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.768398] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.768426] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.780359] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.780386] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.792365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.792400] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.560 [2024-04-24 20:08:07.804329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.560 [2024-04-24 20:08:07.804357] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.816341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.816388] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.828310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.828347] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.840274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.840302] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.852246] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.852272] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.853060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 
00:16:25.821 [2024-04-24 20:08:07.864230] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.864267] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.876215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.876242] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.888220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.888255] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.900191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.900228] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.821 [2024-04-24 20:08:07.912171] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.821 [2024-04-24 20:08:07.912206] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.924136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.924167] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.936129] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.936157] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.948089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.948113] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.958213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.822 [2024-04-24 20:08:07.960085] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.960116] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.972088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.972134] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.984043] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.984076] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:07.996014] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:07.996042] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:08.012018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:08.012058] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:08.023977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:08.024006] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:08.035950] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:08.035976] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:08.047950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:08.047997] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:08.059942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:08.059977] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.822 [2024-04-24 20:08:08.071938] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.822 [2024-04-24 20:08:08.071981] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.083915] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.083952] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.095894] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.095928] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.107887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.107923] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 Running I/O for 5 seconds... 00:16:26.083 [2024-04-24 20:08:08.123860] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.123891] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.139385] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.139427] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.155391] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.155439] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.175429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.175478] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.192447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.192496] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.209449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.209498] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.218977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.219027] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.229022] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 
[2024-04-24 20:08:08.229068] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.242799] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.242841] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.257757] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.257809] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.269168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.269215] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.285233] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.285277] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.301529] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.301570] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.319057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.319101] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.083 [2024-04-24 20:08:08.336026] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.083 [2024-04-24 20:08:08.336089] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.342 [2024-04-24 20:08:08.352498] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.342 [2024-04-24 20:08:08.352551] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.342 [2024-04-24 20:08:08.370110] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.342 [2024-04-24 20:08:08.370160] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.342 [2024-04-24 20:08:08.385322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.342 [2024-04-24 20:08:08.385372] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.342 [2024-04-24 20:08:08.397089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.342 [2024-04-24 20:08:08.397131] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.342 [2024-04-24 20:08:08.414000] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.414048] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.430432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.430474] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.447024] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.447070] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.464593] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.464644] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.480267] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.480331] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.489424] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.489473] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.504972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.505068] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.521389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.521440] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.536532] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.536573] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.552550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.552590] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.569042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.569084] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.343 [2024-04-24 20:08:08.585761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.343 [2024-04-24 20:08:08.585802] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.602794] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.602836] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.617958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.617994] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.632116] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.632150] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.648929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.648970] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.665531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.665570] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.682224] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.682265] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.699687] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.699726] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.717365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.717420] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.732861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.732906] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.751329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.751390] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.766479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.766526] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.785042] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.785085] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.800738] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.800789] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.819011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.819057] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.834212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.834256] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.601 [2024-04-24 20:08:08.845627] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.601 [2024-04-24 20:08:08.845668] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.860 [2024-04-24 20:08:08.862162] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.860 [2024-04-24 20:08:08.862213] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.878361] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.878418] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.895154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.895197] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.912150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.912213] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.932103] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.932158] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.949273] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.949325] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.966646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.966702] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.983430] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.983476] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:08.999586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:08.999645] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.017001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.017061] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.032699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.032751] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.051250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.051299] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.067053] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.067099] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.084216] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.084262] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.099914] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.099959] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.861 [2024-04-24 20:08:09.111634] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.861 [2024-04-24 20:08:09.111676] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.126943] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.126995] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.144644] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.144684] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.159479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.159521] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.175189] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.175242] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.191448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.191495] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.210348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.210427] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.221163] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.221232] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.232105] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.232164] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.248686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.248743] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.265751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.265818] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.281680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.281752] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.298505] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.298555] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.315287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.315349] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.331413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.331455] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.343289] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.343342] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.352426] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.352468] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.363413] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.363456] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.373365] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.373420] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.383651] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.383689] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.393674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.393717] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.162 [2024-04-24 20:08:09.407206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.162 [2024-04-24 20:08:09.407251] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.421878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.421926] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.438276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.438325] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.455217] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.455267] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.470543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.470590] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.485597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.485643] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.496925] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.496970] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.512437] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.512482] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.534123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.534179] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.551415] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.551466] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.561060] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.561106] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.570525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.570567] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.583991] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.584039] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.603375] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.603432] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.617817] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.617862] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.633449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.633505] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.651912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.651962] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.421 [2024-04-24 20:08:09.667642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.421 [2024-04-24 20:08:09.667693] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.684111] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.684174] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.701474] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.701532] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.717358] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.717412] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.732463] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.732505] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.749114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.749163] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.766403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.766448] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.783019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.783063] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.799523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.679 [2024-04-24 20:08:09.799579] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.679 [2024-04-24 20:08:09.816327] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.816382] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.832903] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.832948] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.849757] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.849806] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.865934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.865980] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.884051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.884099] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.899727] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.899778] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.911113] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.911177] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.680 [2024-04-24 20:08:09.927018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.680 [2024-04-24 20:08:09.927072] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:09.943354] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:09.943411] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:09.960118] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:09.960163] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:09.976945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:09.976988] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:09.993621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:09.993662] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.009293] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.009346] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.024310] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.024367] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.035449] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.035491] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.051680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.051729] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.068329] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.068384] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.085366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.085424] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.100896] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.100939] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.111964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.112011] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.128531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.128573] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.144790] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.144831] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.162287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.162328] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:27.938 [2024-04-24 20:08:10.179063] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:27.938 [2024-04-24 20:08:10.179118] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.196477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.196525] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.212945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.212992] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.230127] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.230173] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.247494] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.247545] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.263676] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.263717] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.280719] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.280765] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.290178] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.290218] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.304191] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.304252] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.313098] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.313142] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.323518] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.323559] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.333057] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.333094] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.342965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.343016] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.352797] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.352851] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.362691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.362734] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.372197] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.372236] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.381870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.381905] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.395159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.395196] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.403126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.403161] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.414035] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.414071] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.422542] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.422576] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.432588] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.432618] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.197 [2024-04-24 20:08:10.440555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.197 [2024-04-24 20:08:10.440594] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.452009] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.452056] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.463534] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.463572] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.471568] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.471601] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.483012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.483046] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.494605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.494639] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.502322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.502355] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.513620] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.513658] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.524822] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.524858] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.532813] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.532850] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.548784] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.548822] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.565674] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.565713] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.582972] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.583017] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.599772] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.599816] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.616174] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.616215] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.633711] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.633771] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.649452] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.649493] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.666358] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.666404] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.682488] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.682528] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.458 [2024-04-24 20:08:10.700851] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.458 [2024-04-24 20:08:10.700894] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.715908] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.715961] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.731587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.731636] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.748352] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.748407] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.764533] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.764587] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.782317] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.782361] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.797279] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.797324] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.808648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.808694] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.825087] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.825136] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.842483] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.842535] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.858584] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.858628] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.875324] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.875388] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.893125] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.893174] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.908263] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.908305] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.924214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.924258] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.940600] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.940663] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.718 [2024-04-24 20:08:10.957872] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.718 [2024-04-24 20:08:10.957916] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:10.974754] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:10.974803] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:10.990639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:10.990684] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.008403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.008447] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.024499] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.024538] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.040522] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.040562] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.057231] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.057272] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.073746] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.073786] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.090663] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.090707] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.106555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.106597] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.124202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.124259] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.978 [2024-04-24 20:08:11.139888] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.978 [2024-04-24 20:08:11.139935] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.979 [2024-04-24 20:08:11.157985] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.979 [2024-04-24 20:08:11.158039] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.979 [2024-04-24 20:08:11.173960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.979 [2024-04-24 20:08:11.174008] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.979 [2024-04-24 20:08:11.191297] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.979 [2024-04-24 20:08:11.191345] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.979 [2024-04-24 20:08:11.207713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.979 [2024-04-24 20:08:11.207761] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:28.979 [2024-04-24 20:08:11.224501] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:28.979 [2024-04-24 20:08:11.224548] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.241194] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.241243] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.257989] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.258038] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.275183] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.275228] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.291699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.291743] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.308511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.308552] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.325916] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.325963] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.342214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.342269] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.359250] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.359312] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.239 [2024-04-24 20:08:11.379967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.239 [2024-04-24 20:08:11.380018] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.399733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.399786] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.416849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.416898] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.432661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.432702] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.450165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.450212] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.461569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.461609] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.469477] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.469513] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.240 [2024-04-24 20:08:11.481265] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.240 [2024-04-24 20:08:11.481305] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.492791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.492832] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.501107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.501143] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.512117] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.512152] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.523788] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.523833] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.539436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.539472] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.555559] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.555602] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.571826] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.571869] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.586871] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.586911] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.603744] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.603806] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.619274] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.619319] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.633514] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.633554] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.644543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.644583] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.659696] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.659735] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.676814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.676855] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.693769] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.693817] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.709621] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.709660] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.726214] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.726257] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.500 [2024-04-24 20:08:11.742673] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.500 [2024-04-24 20:08:11.742713] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.759905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.759948] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.777067] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.777107] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.793941] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.793989] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.810010] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.810057] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.827202] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.827252] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.843589] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.843650] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.860890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.860937] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.876736] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.876784] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.894703] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.894749] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.914546] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.914592] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.931742] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.931786] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.948275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.948321] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.964400] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.964444] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.982282] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.982333] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:11.997154] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:11.997199] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.760 [2024-04-24 20:08:12.012926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.760 [2024-04-24 20:08:12.012980] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.030701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.030748] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.046215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.046261] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.058275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.058322] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.074049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.074108] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.094321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.094369] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.110457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.110508] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.121887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.121943] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.130521] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.130566] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.141891] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.141939] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.153450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.153500] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.161734] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.161775] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.173205] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.173254] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.184221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.184265] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.192133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.192172] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.203923] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.203967] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.212928] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.212974] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.225102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.225145] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.234082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.234143] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.246023] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.246059] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.020 [2024-04-24 20:08:12.262182] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.020 [2024-04-24 20:08:12.262220] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.279 [2024-04-24 20:08:12.279208] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.279273] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.294787] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.294846] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.306678] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.306730] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.322615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.322664] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.338900] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.338945] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.350107] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.350163] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.366258] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.366302] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.382449] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.382497] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.398889] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.398940] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.415314] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.415358] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.427091] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.427141] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.443086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.443136] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.460648] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.460696] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.475929] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.475980] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.487636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.487677] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.503658] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.503709] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.280 [2024-04-24 20:08:12.519926] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.280 [2024-04-24 20:08:12.519971] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.537984] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.538034] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.552867] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.552913] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.569071] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.569128] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.585856] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.585903] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.603197] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.603245] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.619136] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.619177] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.635350] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.635400] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.650155] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.650206] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.666747] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.666789] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.684170] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.684212] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.701078] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.701123] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.716722] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.716767] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.728543] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.728579] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.744720] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.744764] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.754571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.754608] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.768830] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.768874] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.539 [2024-04-24 20:08:12.780414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.539 [2024-04-24 20:08:12.780459] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.799977] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.800030] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.811835] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.811881] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.820319] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.820362] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.835562] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.835614] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.846964] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.847007] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.855305] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.855345] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.867168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.867210] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.877009] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.797 [2024-04-24 20:08:12.877056] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.797 [2024-04-24 20:08:12.886732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.886773] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.896447] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.896492] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.906248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.906295] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.915828] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.915868] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.925376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.925431] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.934684] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.934721] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.943875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.943908] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.952585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.952617] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.961443] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.961473] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.970109] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.970157] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.978761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.978793] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.987683] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.987714] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:12.996570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:12.996601] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:13.005746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:13.005778] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:13.015287] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:13.015331] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:13.024226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:13.024266] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:13.033509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:13.033545] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.798 [2024-04-24 20:08:13.042957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.798 [2024-04-24 20:08:13.043002] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.052503] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.052538] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.061932] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.061969] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.071161] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.071201] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.080550] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.080583] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.089756] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.089791] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.099217] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.099253] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.108745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.056 [2024-04-24 20:08:13.108786] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.056 [2024-04-24 20:08:13.115967] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.116005] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 00:16:31.057 Latency(us) 00:16:31.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.057 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:31.057 Nvme1n1 : 5.01 13500.42 105.47 0.00 0.00 9471.11 3777.62 20147.31 00:16:31.057 =================================================================================================================== 00:16:31.057 Total : 13500.42 105.47 0.00 0.00 9471.11 3777.62 20147.31 00:16:31.057 [2024-04-24 20:08:13.122965] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.122998] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.130945] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.130978] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.138940] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.138972] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.146934] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.146967] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.154913] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.154946] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.162902] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.162934] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.170884] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.170915] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.178862] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.178889] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.186851] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.186881] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.194837] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.194870] 
nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.202821] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.202850] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.210803] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.210829] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.218791] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.218817] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.226777] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.226800] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.234759] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.234780] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.242746] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.242765] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.250731] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.250751] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.258713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.258733] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.266725] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.266759] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.274704] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.274739] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.282680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.282702] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.290678] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.290704] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.298680] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.298709] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.057 [2024-04-24 20:08:13.306661] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.057 [2024-04-24 20:08:13.306687] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.315 [2024-04-24 20:08:13.314680] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.315 [2024-04-24 20:08:13.314712] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.315 [2024-04-24 20:08:13.322642] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.315 [2024-04-24 20:08:13.322672] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.315 [2024-04-24 20:08:13.330615] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.315 [2024-04-24 20:08:13.330637] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.315 [2024-04-24 20:08:13.338605] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:31.315 [2024-04-24 20:08:13.338626] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.315 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68002) - No such process 00:16:31.315 20:08:13 -- target/zcopy.sh@49 -- # wait 68002 00:16:31.315 20:08:13 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.316 20:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.316 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:16:31.316 20:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.316 20:08:13 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:31.316 20:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.316 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:16:31.316 delay0 00:16:31.316 20:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.316 20:08:13 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:31.316 20:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.316 20:08:13 -- common/autotest_common.sh@10 -- # set +x 00:16:31.316 20:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.316 20:08:13 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:31.316 [2024-04-24 20:08:13.534262] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:37.927 Initializing NVMe Controllers 00:16:37.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:37.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:37.927 Initialization complete. Launching workers. 
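The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appears to be the zcopy test deliberately re-issuing nvmf_subsystem_add_ns for an NSID that already exists while the subsystem is paused under live I/O, so these errors are the exercised failure path rather than a broken target. The tail of the test, traced just above, then swaps the namespace for a delay bdev and drives the abort example against it. A minimal sketch of that tail follows; scripts/rpc.py is assumed here as a stand-in for the harness's rpc_cmd wrapper, while the bdev names, subsystem NQN, target address, and abort flags are copied from the trace.

```bash
#!/usr/bin/env bash
# Sketch of the zcopy abort sequence traced above. Assumes a running nvmf_tgt
# with malloc0 and nqn.2016-06.io.spdk:cnode1 already configured earlier in
# the run; scripts/rpc.py stands in for the harness's rpc_cmd wrapper.
set -e
SPDK=/home/vagrant/spdk_repo/spdk

# Drop the original namespace and replace it with a delay bdev layered on
# malloc0 (all four delay latencies set to 1000000 us, as in the trace).
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
"$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Run the abort example for 5 seconds at queue depth 64 against the slow
# namespace so that most I/Os are still outstanding when aborts are submitted.
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```

In the abort statistics that follow, "success", "unsuccess", and "failed" appear to count aborts that took effect, aborts that completed after their target I/O had already finished, and outright errors, respectively.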
00:16:37.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 265, failed: 14372 00:16:37.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14537, failed to submit 100 00:16:37.927 success 14457, unsuccess 80, failed 0 00:16:37.927 20:08:19 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:37.927 20:08:19 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:37.927 20:08:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:37.927 20:08:19 -- nvmf/common.sh@117 -- # sync 00:16:37.927 20:08:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.927 20:08:19 -- nvmf/common.sh@120 -- # set +e 00:16:37.927 20:08:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.927 20:08:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.927 rmmod nvme_tcp 00:16:37.927 rmmod nvme_fabrics 00:16:37.927 rmmod nvme_keyring 00:16:37.927 20:08:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:37.927 20:08:19 -- nvmf/common.sh@124 -- # set -e 00:16:37.927 20:08:19 -- nvmf/common.sh@125 -- # return 0 00:16:37.927 20:08:19 -- nvmf/common.sh@478 -- # '[' -n 67847 ']' 00:16:37.927 20:08:19 -- nvmf/common.sh@479 -- # killprocess 67847 00:16:37.927 20:08:19 -- common/autotest_common.sh@936 -- # '[' -z 67847 ']' 00:16:37.927 20:08:19 -- common/autotest_common.sh@940 -- # kill -0 67847 00:16:37.927 20:08:19 -- common/autotest_common.sh@941 -- # uname 00:16:37.927 20:08:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.927 20:08:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67847 00:16:37.927 20:08:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:37.927 20:08:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:37.927 20:08:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67847' 00:16:37.927 killing process with pid 67847 00:16:37.927 20:08:19 -- common/autotest_common.sh@955 -- # kill 67847 00:16:37.927 [2024-04-24 20:08:19.703705] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:37.927 20:08:19 -- common/autotest_common.sh@960 -- # wait 67847 00:16:37.927 20:08:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:37.927 20:08:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:37.927 20:08:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:37.927 20:08:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.927 20:08:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.927 20:08:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.927 20:08:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.927 20:08:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.927 20:08:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:37.927 00:16:37.927 real 0m24.727s 00:16:37.927 user 0m40.993s 00:16:37.927 sys 0m6.146s 00:16:37.927 20:08:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:37.927 ************************************ 00:16:37.927 END TEST nvmf_zcopy 00:16:37.927 ************************************ 00:16:37.927 20:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:37.927 20:08:20 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.927 20:08:20 -- common/autotest_common.sh@1087 -- # '[' 
3 -le 1 ']' 00:16:37.927 20:08:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.927 20:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:38.187 ************************************ 00:16:38.187 START TEST nvmf_nmic 00:16:38.187 ************************************ 00:16:38.187 20:08:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:38.187 * Looking for test storage... 00:16:38.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:38.187 20:08:20 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.187 20:08:20 -- nvmf/common.sh@7 -- # uname -s 00:16:38.187 20:08:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.187 20:08:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.187 20:08:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.187 20:08:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.187 20:08:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.187 20:08:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.187 20:08:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.187 20:08:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.187 20:08:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.187 20:08:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.188 20:08:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:16:38.188 20:08:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:16:38.188 20:08:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.188 20:08:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.188 20:08:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.188 20:08:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.188 20:08:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.188 20:08:20 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.188 20:08:20 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.188 20:08:20 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.188 20:08:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.188 20:08:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.188 20:08:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.188 20:08:20 -- paths/export.sh@5 -- # export PATH 00:16:38.188 20:08:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.188 20:08:20 -- nvmf/common.sh@47 -- # : 0 00:16:38.188 20:08:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.188 20:08:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.188 20:08:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.188 20:08:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.188 20:08:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.188 20:08:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.188 20:08:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.188 20:08:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.188 20:08:20 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.188 20:08:20 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.188 20:08:20 -- target/nmic.sh@14 -- # nvmftestinit 00:16:38.188 20:08:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:38.188 20:08:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.188 20:08:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:38.188 20:08:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:38.188 20:08:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:38.188 20:08:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.188 20:08:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.188 20:08:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.188 20:08:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:38.188 20:08:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:38.188 20:08:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:38.188 20:08:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:38.188 20:08:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:38.188 20:08:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:38.188 20:08:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.188 20:08:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:38.188 20:08:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:38.188 20:08:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:38.188 20:08:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.188 20:08:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.188 20:08:20 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.188 20:08:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:38.188 20:08:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.188 20:08:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.188 20:08:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.188 20:08:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.188 20:08:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:38.188 20:08:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:38.188 Cannot find device "nvmf_tgt_br" 00:16:38.188 20:08:20 -- nvmf/common.sh@155 -- # true 00:16:38.188 20:08:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.188 Cannot find device "nvmf_tgt_br2" 00:16:38.188 20:08:20 -- nvmf/common.sh@156 -- # true 00:16:38.188 20:08:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:38.188 20:08:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:38.188 Cannot find device "nvmf_tgt_br" 00:16:38.188 20:08:20 -- nvmf/common.sh@158 -- # true 00:16:38.447 20:08:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:38.447 Cannot find device "nvmf_tgt_br2" 00:16:38.447 20:08:20 -- nvmf/common.sh@159 -- # true 00:16:38.447 20:08:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:38.447 20:08:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:38.447 20:08:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.447 20:08:20 -- nvmf/common.sh@162 -- # true 00:16:38.447 20:08:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:38.447 20:08:20 -- nvmf/common.sh@163 -- # true 00:16:38.447 20:08:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:38.447 20:08:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:38.447 20:08:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:38.447 20:08:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:38.447 20:08:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.447 20:08:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.447 20:08:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.447 20:08:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.447 20:08:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.447 20:08:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:38.447 20:08:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:38.447 20:08:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:38.447 20:08:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:38.447 20:08:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.447 20:08:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.447 20:08:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:38.447 20:08:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:38.447 20:08:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:38.447 20:08:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.447 20:08:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.447 20:08:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.447 20:08:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.447 20:08:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.447 20:08:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:38.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:38.447 00:16:38.447 --- 10.0.0.2 ping statistics --- 00:16:38.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.447 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:38.447 20:08:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:38.447 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.447 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:16:38.447 00:16:38.447 --- 10.0.0.3 ping statistics --- 00:16:38.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.447 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:38.447 20:08:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:38.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:16:38.447 00:16:38.447 --- 10.0.0.1 ping statistics --- 00:16:38.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.447 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:38.447 20:08:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.447 20:08:20 -- nvmf/common.sh@422 -- # return 0 00:16:38.447 20:08:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:38.447 20:08:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.447 20:08:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:38.447 20:08:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:38.447 20:08:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.447 20:08:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:38.447 20:08:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:38.447 20:08:20 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:38.707 20:08:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:38.707 20:08:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:38.707 20:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:38.707 20:08:20 -- nvmf/common.sh@470 -- # nvmfpid=68325 00:16:38.707 20:08:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.707 20:08:20 -- nvmf/common.sh@471 -- # waitforlisten 68325 00:16:38.707 20:08:20 -- common/autotest_common.sh@817 -- # '[' -z 68325 ']' 00:16:38.707 20:08:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.707 20:08:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:38.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:38.707 20:08:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.707 20:08:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:38.707 20:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:38.707 [2024-04-24 20:08:20.761599] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:38.707 [2024-04-24 20:08:20.761675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.707 [2024-04-24 20:08:20.902707] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.966 [2024-04-24 20:08:21.038053] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.966 [2024-04-24 20:08:21.038124] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.966 [2024-04-24 20:08:21.038134] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.966 [2024-04-24 20:08:21.038141] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.966 [2024-04-24 20:08:21.038148] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:38.966 [2024-04-24 20:08:21.038412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.966 [2024-04-24 20:08:21.038465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.966 [2024-04-24 20:08:21.038623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.966 [2024-04-24 20:08:21.038623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:39.535 20:08:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:39.535 20:08:21 -- common/autotest_common.sh@850 -- # return 0 00:16:39.535 20:08:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:39.535 20:08:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 20:08:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.535 20:08:21 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 [2024-04-24 20:08:21.695263] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.535 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.535 20:08:21 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 Malloc0 00:16:39.535 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.535 20:08:21 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.535 20:08:21 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
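For reference, the veth topology that nvmf_veth_init builds in the trace above can be reproduced by hand with plain iproute2; this is a minimal sketch using the interface names and 10.0.0.0/24 addressing taken from the trace (the helper itself adds extra cleanup and error handling around the same calls):

# target-side namespace plus three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator is 10.0.0.1 on the host, target listeners live on 10.0.0.2/10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and open TCP/4420 towards the initiator interface
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the target app is then started inside the namespace, exactly as traced above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF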
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.535 20:08:21 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 [2024-04-24 20:08:21.769373] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:39.535 [2024-04-24 20:08:21.769591] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.535 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.535 test case1: single bdev can't be used in multiple subsystems 00:16:39.535 20:08:21 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:39.535 20:08:21 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.535 20:08:21 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:39.535 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.535 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.795 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.795 20:08:21 -- target/nmic.sh@28 -- # nmic_status=0 00:16:39.795 20:08:21 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:39.795 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.795 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.795 [2024-04-24 20:08:21.805400] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:39.795 [2024-04-24 20:08:21.805428] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:39.795 [2024-04-24 20:08:21.805435] nvmf_rpc.c:1542:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.795 request: 00:16:39.795 { 00:16:39.795 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:39.795 "namespace": { 00:16:39.795 "bdev_name": "Malloc0", 00:16:39.795 "no_auto_visible": false 00:16:39.795 }, 00:16:39.795 "method": "nvmf_subsystem_add_ns", 00:16:39.795 "req_id": 1 00:16:39.795 } 00:16:39.795 Got JSON-RPC error response 00:16:39.795 response: 00:16:39.795 { 00:16:39.795 "code": -32602, 00:16:39.795 "message": "Invalid parameters" 00:16:39.795 } 00:16:39.795 20:08:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:39.795 20:08:21 -- target/nmic.sh@29 -- # nmic_status=1 00:16:39.795 20:08:21 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:39.795 Adding namespace failed - expected result. 00:16:39.795 20:08:21 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:16:39.795 test case2: host connect to nvmf target in multiple paths 00:16:39.795 20:08:21 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:39.795 20:08:21 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:39.795 20:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:39.795 20:08:21 -- common/autotest_common.sh@10 -- # set +x 00:16:39.795 [2024-04-24 20:08:21.821497] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:39.795 20:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:39.795 20:08:21 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.795 20:08:21 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:40.054 20:08:22 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.054 20:08:22 -- common/autotest_common.sh@1184 -- # local i=0 00:16:40.054 20:08:22 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.054 20:08:22 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:40.055 20:08:22 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:41.958 20:08:24 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:41.958 20:08:24 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:41.958 20:08:24 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.958 20:08:24 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:41.958 20:08:24 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.958 20:08:24 -- common/autotest_common.sh@1194 -- # return 0 00:16:41.958 20:08:24 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:41.958 [global] 00:16:41.958 thread=1 00:16:41.958 invalidate=1 00:16:41.958 rw=write 00:16:41.958 time_based=1 00:16:41.958 runtime=1 00:16:41.958 ioengine=libaio 00:16:41.958 direct=1 00:16:41.958 bs=4096 00:16:41.958 iodepth=1 00:16:41.958 norandommap=0 00:16:41.958 numjobs=1 00:16:41.958 00:16:41.958 verify_dump=1 00:16:41.958 verify_backlog=512 00:16:41.958 verify_state_save=0 00:16:41.958 do_verify=1 00:16:41.958 verify=crc32c-intel 00:16:41.958 [job0] 00:16:41.958 filename=/dev/nvme0n1 00:16:41.958 Could not set queue depth (nvme0n1) 00:16:42.217 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.217 fio-3.35 00:16:42.217 Starting 1 thread 00:16:43.161 00:16:43.161 job0: (groupid=0, jobs=1): err= 0: pid=68418: Wed Apr 24 20:08:25 2024 00:16:43.161 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:16:43.161 slat (nsec): min=6461, max=36847, avg=8953.11, stdev=2529.16 00:16:43.161 clat (usec): min=103, max=249, avg=152.98, stdev=17.32 00:16:43.161 lat (usec): min=110, max=258, avg=161.93, stdev=17.63 00:16:43.161 clat percentiles (usec): 00:16:43.161 | 1.00th=[ 118], 5.00th=[ 125], 10.00th=[ 130], 20.00th=[ 137], 00:16:43.161 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:16:43.161 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 172], 
95.00th=[ 180], 00:16:43.161 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 241], 99.95th=[ 245], 00:16:43.161 | 99.99th=[ 251] 00:16:43.161 write: IOPS=3766, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec); 0 zone resets 00:16:43.161 slat (usec): min=9, max=131, avg=15.25, stdev= 7.07 00:16:43.161 clat (usec): min=64, max=208, avg=93.80, stdev=13.41 00:16:43.161 lat (usec): min=76, max=339, avg=109.05, stdev=16.76 00:16:43.161 clat percentiles (usec): 00:16:43.161 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 83], 00:16:43.161 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 97], 00:16:43.161 | 70.00th=[ 100], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 118], 00:16:43.161 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 163], 99.95th=[ 167], 00:16:43.162 | 99.99th=[ 208] 00:16:43.162 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:16:43.162 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:43.162 lat (usec) : 100=35.82%, 250=64.18% 00:16:43.162 cpu : usr=1.70%, sys=7.00%, ctx=7355, majf=0, minf=2 00:16:43.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.162 issued rwts: total=3584,3770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.162 00:16:43.162 Run status group 0 (all jobs): 00:16:43.162 READ: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:43.162 WRITE: bw=14.7MiB/s (15.4MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=14.7MiB (15.4MB), run=1001-1001msec 00:16:43.162 00:16:43.162 Disk stats (read/write): 00:16:43.162 nvme0n1: ios=3137/3584, merge=0/0, ticks=491/352, in_queue=843, util=91.48% 00:16:43.420 20:08:25 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:43.420 20:08:25 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.420 20:08:25 -- common/autotest_common.sh@1205 -- # local i=0 00:16:43.420 20:08:25 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:43.420 20:08:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.420 20:08:25 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:43.420 20:08:25 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.420 20:08:25 -- common/autotest_common.sh@1217 -- # return 0 00:16:43.420 20:08:25 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:43.420 20:08:25 -- target/nmic.sh@53 -- # nvmftestfini 00:16:43.420 20:08:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:43.420 20:08:25 -- nvmf/common.sh@117 -- # sync 00:16:43.679 20:08:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.679 20:08:25 -- nvmf/common.sh@120 -- # set +e 00:16:43.679 20:08:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.679 20:08:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.679 rmmod nvme_tcp 00:16:43.679 rmmod nvme_fabrics 00:16:43.679 rmmod nvme_keyring 00:16:43.679 20:08:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.679 20:08:25 -- nvmf/common.sh@124 -- # set -e 00:16:43.679 20:08:25 -- nvmf/common.sh@125 -- # return 0 00:16:43.679 20:08:25 -- nvmf/common.sh@478 -- # '[' -n 68325 ']' 
00:16:43.679 20:08:25 -- nvmf/common.sh@479 -- # killprocess 68325 00:16:43.679 20:08:25 -- common/autotest_common.sh@936 -- # '[' -z 68325 ']' 00:16:43.679 20:08:25 -- common/autotest_common.sh@940 -- # kill -0 68325 00:16:43.679 20:08:25 -- common/autotest_common.sh@941 -- # uname 00:16:43.679 20:08:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.679 20:08:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68325 00:16:43.679 20:08:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:43.679 20:08:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:43.680 killing process with pid 68325 00:16:43.680 20:08:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68325' 00:16:43.680 20:08:25 -- common/autotest_common.sh@955 -- # kill 68325 00:16:43.680 [2024-04-24 20:08:25.786779] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:43.680 20:08:25 -- common/autotest_common.sh@960 -- # wait 68325 00:16:43.939 20:08:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.939 20:08:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:43.939 20:08:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:43.939 20:08:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.939 20:08:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.939 20:08:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.939 20:08:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.939 20:08:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.939 20:08:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:43.939 ************************************ 00:16:43.939 END TEST nvmf_nmic 00:16:43.939 ************************************ 00:16:43.939 00:16:43.939 real 0m5.895s 00:16:43.939 user 0m19.138s 00:16:43.939 sys 0m1.884s 00:16:43.939 20:08:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.939 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:16:43.939 20:08:26 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:43.939 20:08:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.939 20:08:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.939 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:16:44.199 ************************************ 00:16:44.199 START TEST nvmf_fio_target 00:16:44.199 ************************************ 00:16:44.199 20:08:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:44.199 * Looking for test storage... 
00:16:44.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:44.199 20:08:26 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.199 20:08:26 -- nvmf/common.sh@7 -- # uname -s 00:16:44.199 20:08:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.199 20:08:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.199 20:08:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.199 20:08:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.199 20:08:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.199 20:08:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.199 20:08:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.199 20:08:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.199 20:08:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.199 20:08:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.199 20:08:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:16:44.199 20:08:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:16:44.199 20:08:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.199 20:08:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.199 20:08:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.199 20:08:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.199 20:08:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.199 20:08:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.199 20:08:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.199 20:08:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.199 20:08:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.200 20:08:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.200 20:08:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.200 20:08:26 -- paths/export.sh@5 -- # export PATH 00:16:44.200 20:08:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.200 20:08:26 -- nvmf/common.sh@47 -- # : 0 00:16:44.200 20:08:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.200 20:08:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.200 20:08:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.200 20:08:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.200 20:08:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.200 20:08:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.200 20:08:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.200 20:08:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.200 20:08:26 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.200 20:08:26 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.200 20:08:26 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.200 20:08:26 -- target/fio.sh@16 -- # nvmftestinit 00:16:44.200 20:08:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:44.200 20:08:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.200 20:08:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:44.200 20:08:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:44.200 20:08:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:44.200 20:08:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.200 20:08:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.200 20:08:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.200 20:08:26 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:44.200 20:08:26 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:44.200 20:08:26 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:44.200 20:08:26 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:44.200 20:08:26 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:44.200 20:08:26 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:44.200 20:08:26 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.200 20:08:26 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.200 20:08:26 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.200 20:08:26 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:44.200 20:08:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.200 20:08:26 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.200 20:08:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.200 20:08:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.200 20:08:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.200 20:08:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.200 20:08:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.200 20:08:26 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.200 20:08:26 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:44.200 20:08:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:44.200 Cannot find device "nvmf_tgt_br" 00:16:44.200 20:08:26 -- nvmf/common.sh@155 -- # true 00:16:44.200 20:08:26 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.200 Cannot find device "nvmf_tgt_br2" 00:16:44.200 20:08:26 -- nvmf/common.sh@156 -- # true 00:16:44.200 20:08:26 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:44.200 20:08:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:44.460 Cannot find device "nvmf_tgt_br" 00:16:44.460 20:08:26 -- nvmf/common.sh@158 -- # true 00:16:44.460 20:08:26 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:44.460 Cannot find device "nvmf_tgt_br2" 00:16:44.460 20:08:26 -- nvmf/common.sh@159 -- # true 00:16:44.460 20:08:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:44.460 20:08:26 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:44.460 20:08:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.460 20:08:26 -- nvmf/common.sh@162 -- # true 00:16:44.460 20:08:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.460 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.460 20:08:26 -- nvmf/common.sh@163 -- # true 00:16:44.460 20:08:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.460 20:08:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.460 20:08:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.460 20:08:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.460 20:08:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.460 20:08:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.460 20:08:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.460 20:08:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.460 20:08:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.460 20:08:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.460 20:08:26 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.460 20:08:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.460 20:08:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.460 20:08:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.460 20:08:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:44.460 20:08:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.718 20:08:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.718 20:08:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.718 20:08:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.718 20:08:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.718 20:08:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.718 20:08:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.718 20:08:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.718 20:08:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:44.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:44.718 00:16:44.718 --- 10.0.0.2 ping statistics --- 00:16:44.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.718 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:44.718 20:08:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:44.718 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.718 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:16:44.718 00:16:44.718 --- 10.0.0.3 ping statistics --- 00:16:44.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.718 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:44.718 20:08:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:44.718 00:16:44.718 --- 10.0.0.1 ping statistics --- 00:16:44.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.718 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:44.718 20:08:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.718 20:08:26 -- nvmf/common.sh@422 -- # return 0 00:16:44.718 20:08:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:44.718 20:08:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.718 20:08:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:44.718 20:08:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:44.718 20:08:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.718 20:08:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:44.718 20:08:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:44.718 20:08:26 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:44.718 20:08:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:44.718 20:08:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:44.718 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:16:44.718 20:08:26 -- nvmf/common.sh@470 -- # nvmfpid=68600 00:16:44.718 20:08:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.718 20:08:26 -- nvmf/common.sh@471 -- # waitforlisten 68600 00:16:44.718 20:08:26 -- common/autotest_common.sh@817 -- # '[' -z 68600 ']' 00:16:44.718 20:08:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.718 20:08:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:44.718 20:08:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.718 20:08:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.718 20:08:26 -- common/autotest_common.sh@10 -- # set +x 00:16:44.718 [2024-04-24 20:08:26.903054] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:16:44.718 [2024-04-24 20:08:26.903129] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.977 [2024-04-24 20:08:27.043167] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.977 [2024-04-24 20:08:27.145038] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.977 [2024-04-24 20:08:27.145081] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.977 [2024-04-24 20:08:27.145089] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.977 [2024-04-24 20:08:27.145094] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.977 [2024-04-24 20:08:27.145099] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.977 [2024-04-24 20:08:27.146171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.977 [2024-04-24 20:08:27.146271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.977 [2024-04-24 20:08:27.146458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.977 [2024-04-24 20:08:27.146459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.546 20:08:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.546 20:08:27 -- common/autotest_common.sh@850 -- # return 0 00:16:45.546 20:08:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:45.546 20:08:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:45.546 20:08:27 -- common/autotest_common.sh@10 -- # set +x 00:16:45.805 20:08:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.805 20:08:27 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:45.805 [2024-04-24 20:08:28.002209] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.805 20:08:28 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.065 20:08:28 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:46.065 20:08:28 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.325 20:08:28 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:46.325 20:08:28 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.584 20:08:28 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:46.584 20:08:28 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.845 20:08:28 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:46.845 20:08:28 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:47.104 20:08:29 -- 
target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.104 20:08:29 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:47.104 20:08:29 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.364 20:08:29 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:47.364 20:08:29 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.623 20:08:29 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:47.623 20:08:29 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:47.882 20:08:30 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:48.141 20:08:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:48.141 20:08:30 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.400 20:08:30 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:48.400 20:08:30 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.660 20:08:30 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.660 [2024-04-24 20:08:30.874594] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:48.660 [2024-04-24 20:08:30.875204] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.660 20:08:30 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:48.919 20:08:31 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:49.179 20:08:31 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.439 20:08:31 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:49.439 20:08:31 -- common/autotest_common.sh@1184 -- # local i=0 00:16:49.439 20:08:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.439 20:08:31 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:49.439 20:08:31 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:49.439 20:08:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:51.346 20:08:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:51.346 20:08:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:51.346 20:08:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.346 20:08:33 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:51.346 20:08:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.346 20:08:33 -- common/autotest_common.sh@1194 -- # return 0 00:16:51.346 20:08:33 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:51.346 
[global] 00:16:51.346 thread=1 00:16:51.346 invalidate=1 00:16:51.346 rw=write 00:16:51.346 time_based=1 00:16:51.346 runtime=1 00:16:51.346 ioengine=libaio 00:16:51.346 direct=1 00:16:51.346 bs=4096 00:16:51.346 iodepth=1 00:16:51.346 norandommap=0 00:16:51.346 numjobs=1 00:16:51.346 00:16:51.346 verify_dump=1 00:16:51.346 verify_backlog=512 00:16:51.346 verify_state_save=0 00:16:51.346 do_verify=1 00:16:51.346 verify=crc32c-intel 00:16:51.346 [job0] 00:16:51.346 filename=/dev/nvme0n1 00:16:51.346 [job1] 00:16:51.346 filename=/dev/nvme0n2 00:16:51.346 [job2] 00:16:51.346 filename=/dev/nvme0n3 00:16:51.346 [job3] 00:16:51.346 filename=/dev/nvme0n4 00:16:51.606 Could not set queue depth (nvme0n1) 00:16:51.606 Could not set queue depth (nvme0n2) 00:16:51.606 Could not set queue depth (nvme0n3) 00:16:51.606 Could not set queue depth (nvme0n4) 00:16:51.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.606 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.606 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.606 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.606 fio-3.35 00:16:51.606 Starting 4 threads 00:16:52.983 00:16:52.983 job0: (groupid=0, jobs=1): err= 0: pid=68782: Wed Apr 24 20:08:34 2024 00:16:52.983 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:52.983 slat (nsec): min=6976, max=31388, avg=8240.19, stdev=1446.35 00:16:52.983 clat (usec): min=122, max=725, avg=164.61, stdev=22.61 00:16:52.983 lat (usec): min=129, max=733, avg=172.85, stdev=22.73 00:16:52.983 clat percentiles (usec): 00:16:52.983 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:16:52.983 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:16:52.983 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:16:52.983 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 469], 99.95th=[ 570], 00:16:52.983 | 99.99th=[ 725] 00:16:52.983 write: IOPS=3103, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:16:52.983 slat (usec): min=9, max=150, avg=15.32, stdev=10.08 00:16:52.983 clat (usec): min=81, max=1387, avg=133.34, stdev=41.75 00:16:52.983 lat (usec): min=94, max=1399, avg=148.66, stdev=43.61 00:16:52.983 clat percentiles (usec): 00:16:52.983 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 116], 00:16:52.983 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 130], 00:16:52.983 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 155], 95.00th=[ 172], 00:16:52.983 | 99.00th=[ 355], 99.50th=[ 383], 99.90th=[ 433], 99.95th=[ 490], 00:16:52.983 | 99.99th=[ 1385] 00:16:52.983 bw ( KiB/s): min=12312, max=12312, per=24.01%, avg=12312.00, stdev= 0.00, samples=1 00:16:52.983 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:16:52.983 lat (usec) : 100=0.58%, 250=98.48%, 500=0.87%, 750=0.05% 00:16:52.983 lat (msec) : 2=0.02% 00:16:52.983 cpu : usr=1.10%, sys=6.00%, ctx=6179, majf=0, minf=9 00:16:52.983 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.983 issued rwts: total=3072,3107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.983 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:16:52.983 job1: (groupid=0, jobs=1): err= 0: pid=68783: Wed Apr 24 20:08:34 2024 00:16:52.983 read: IOPS=3080, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:52.983 slat (nsec): min=7075, max=81434, avg=8921.17, stdev=3729.39 00:16:52.983 clat (usec): min=124, max=1818, avg=159.45, stdev=42.71 00:16:52.983 lat (usec): min=132, max=1841, avg=168.37, stdev=43.36 00:16:52.983 clat percentiles (usec): 00:16:52.983 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:16:52.983 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:16:52.983 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 184], 00:16:52.983 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 437], 99.95th=[ 1598], 00:16:52.983 | 99.99th=[ 1827] 00:16:52.983 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:52.983 slat (nsec): min=9418, max=97886, avg=14536.52, stdev=7004.23 00:16:52.983 clat (usec): min=85, max=317, avg=117.39, stdev=13.33 00:16:52.983 lat (usec): min=97, max=330, avg=131.92, stdev=16.11 00:16:52.983 clat percentiles (usec): 00:16:52.983 | 1.00th=[ 93], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 106], 00:16:52.983 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 117], 60.00th=[ 120], 00:16:52.983 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 141], 00:16:52.983 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 206], 00:16:52.983 | 99.99th=[ 318] 00:16:52.983 bw ( KiB/s): min=14280, max=14280, per=27.84%, avg=14280.00, stdev= 0.00, samples=1 00:16:52.983 iops : min= 3570, max= 3570, avg=3570.00, stdev= 0.00, samples=1 00:16:52.983 lat (usec) : 100=3.49%, 250=96.42%, 500=0.04%, 750=0.01% 00:16:52.983 lat (msec) : 2=0.03% 00:16:52.983 cpu : usr=1.80%, sys=5.90%, ctx=6668, majf=0, minf=5 00:16:52.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.984 issued rwts: total=3084,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.984 job2: (groupid=0, jobs=1): err= 0: pid=68787: Wed Apr 24 20:08:34 2024 00:16:52.984 read: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(11.4MiB/1001msec) 00:16:52.984 slat (nsec): min=6777, max=37718, avg=9493.11, stdev=3231.59 00:16:52.984 clat (usec): min=134, max=1783, avg=172.09, stdev=35.14 00:16:52.984 lat (usec): min=142, max=1802, avg=181.58, stdev=35.55 00:16:52.984 clat percentiles (usec): 00:16:52.984 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:16:52.984 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:16:52.984 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:16:52.984 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 408], 99.95th=[ 799], 00:16:52.984 | 99.99th=[ 1778] 00:16:52.984 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:52.984 slat (usec): min=9, max=131, avg=15.27, stdev= 7.56 00:16:52.984 clat (usec): min=99, max=238, avg=134.73, stdev=14.46 00:16:52.984 lat (usec): min=112, max=326, avg=150.00, stdev=17.69 00:16:52.984 clat percentiles (usec): 00:16:52.984 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 124], 00:16:52.984 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:16:52.984 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:16:52.984 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 
208], 99.95th=[ 229], 00:16:52.984 | 99.99th=[ 239] 00:16:52.984 bw ( KiB/s): min=12288, max=12288, per=23.96%, avg=12288.00, stdev= 0.00, samples=1 00:16:52.984 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:52.984 lat (usec) : 100=0.02%, 250=99.93%, 500=0.02%, 1000=0.02% 00:16:52.984 lat (msec) : 2=0.02% 00:16:52.984 cpu : usr=2.00%, sys=5.40%, ctx=6001, majf=0, minf=12 00:16:52.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.984 issued rwts: total=2928,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.984 job3: (groupid=0, jobs=1): err= 0: pid=68788: Wed Apr 24 20:08:34 2024 00:16:52.984 read: IOPS=2865, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec) 00:16:52.984 slat (nsec): min=6832, max=31693, avg=9061.74, stdev=2575.62 00:16:52.984 clat (usec): min=135, max=1436, avg=173.39, stdev=28.39 00:16:52.984 lat (usec): min=143, max=1445, avg=182.45, stdev=28.64 00:16:52.984 clat percentiles (usec): 00:16:52.984 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:16:52.984 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:16:52.984 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:16:52.984 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 245], 99.95th=[ 510], 00:16:52.984 | 99.99th=[ 1434] 00:16:52.984 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:52.984 slat (usec): min=9, max=136, avg=14.45, stdev= 6.63 00:16:52.984 clat (usec): min=85, max=461, avg=138.54, stdev=17.23 00:16:52.984 lat (usec): min=98, max=477, avg=152.99, stdev=18.90 00:16:52.984 clat percentiles (usec): 00:16:52.984 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 127], 00:16:52.984 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:16:52.984 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 165], 00:16:52.984 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 253], 99.95th=[ 388], 00:16:52.984 | 99.99th=[ 461] 00:16:52.984 bw ( KiB/s): min=12288, max=12288, per=23.96%, avg=12288.00, stdev= 0.00, samples=1 00:16:52.984 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:52.984 lat (usec) : 100=0.12%, 250=99.78%, 500=0.07%, 750=0.02% 00:16:52.984 lat (msec) : 2=0.02% 00:16:52.984 cpu : usr=1.50%, sys=5.40%, ctx=5940, majf=0, minf=11 00:16:52.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.984 issued rwts: total=2868,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.984 00:16:52.984 Run status group 0 (all jobs): 00:16:52.984 READ: bw=46.6MiB/s (48.9MB/s), 11.2MiB/s-12.0MiB/s (11.7MB/s-12.6MB/s), io=46.7MiB (49.0MB), run=1001-1001msec 00:16:52.984 WRITE: bw=50.1MiB/s (52.5MB/s), 12.0MiB/s-14.0MiB/s (12.6MB/s-14.7MB/s), io=50.1MiB (52.6MB), run=1001-1001msec 00:16:52.984 00:16:52.984 Disk stats (read/write): 00:16:52.984 nvme0n1: ios=2610/2818, merge=0/0, ticks=442/395, in_queue=837, util=89.08% 00:16:52.984 nvme0n2: ios=2757/3072, merge=0/0, ticks=443/372, in_queue=815, util=90.00% 00:16:52.984 nvme0n3: ios=2560/2645, 
merge=0/0, ticks=452/369, in_queue=821, util=89.41% 00:16:52.984 nvme0n4: ios=2560/2591, merge=0/0, ticks=444/379, in_queue=823, util=89.78% 00:16:52.984 20:08:34 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:52.984 [global] 00:16:52.984 thread=1 00:16:52.984 invalidate=1 00:16:52.984 rw=randwrite 00:16:52.984 time_based=1 00:16:52.984 runtime=1 00:16:52.984 ioengine=libaio 00:16:52.984 direct=1 00:16:52.984 bs=4096 00:16:52.984 iodepth=1 00:16:52.984 norandommap=0 00:16:52.984 numjobs=1 00:16:52.984 00:16:52.984 verify_dump=1 00:16:52.984 verify_backlog=512 00:16:52.984 verify_state_save=0 00:16:52.984 do_verify=1 00:16:52.984 verify=crc32c-intel 00:16:52.984 [job0] 00:16:52.984 filename=/dev/nvme0n1 00:16:52.984 [job1] 00:16:52.984 filename=/dev/nvme0n2 00:16:52.984 [job2] 00:16:52.984 filename=/dev/nvme0n3 00:16:52.984 [job3] 00:16:52.984 filename=/dev/nvme0n4 00:16:52.984 Could not set queue depth (nvme0n1) 00:16:52.984 Could not set queue depth (nvme0n2) 00:16:52.984 Could not set queue depth (nvme0n3) 00:16:52.984 Could not set queue depth (nvme0n4) 00:16:52.984 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.984 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.984 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.984 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:52.984 fio-3.35 00:16:52.984 Starting 4 threads 00:16:54.358 00:16:54.358 job0: (groupid=0, jobs=1): err= 0: pid=68842: Wed Apr 24 20:08:36 2024 00:16:54.358 read: IOPS=1950, BW=7800KiB/s (7987kB/s)(7808KiB/1001msec) 00:16:54.358 slat (nsec): min=7236, max=81051, avg=13599.82, stdev=5784.75 00:16:54.358 clat (usec): min=135, max=1858, avg=274.46, stdev=65.05 00:16:54.358 lat (usec): min=144, max=1880, avg=288.06, stdev=65.84 00:16:54.358 clat percentiles (usec): 00:16:54.358 | 1.00th=[ 153], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 241], 00:16:54.358 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:16:54.358 | 70.00th=[ 277], 80.00th=[ 310], 90.00th=[ 351], 95.00th=[ 371], 00:16:54.358 | 99.00th=[ 478], 99.50th=[ 490], 99.90th=[ 519], 99.95th=[ 1860], 00:16:54.358 | 99.99th=[ 1860] 00:16:54.358 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:54.358 slat (nsec): min=11193, max=88221, avg=20256.64, stdev=5786.21 00:16:54.358 clat (usec): min=88, max=701, avg=189.89, stdev=61.83 00:16:54.358 lat (usec): min=108, max=729, avg=210.15, stdev=63.53 00:16:54.358 clat percentiles (usec): 00:16:54.358 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 131], 00:16:54.358 | 30.00th=[ 161], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 194], 00:16:54.358 | 70.00th=[ 204], 80.00th=[ 219], 90.00th=[ 297], 95.00th=[ 314], 00:16:54.358 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 529], 99.95th=[ 644], 00:16:54.358 | 99.99th=[ 701] 00:16:54.358 bw ( KiB/s): min= 8192, max= 8192, per=19.54%, avg=8192.00, stdev= 0.00, samples=1 00:16:54.358 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:54.358 lat (usec) : 100=0.25%, 250=59.40%, 500=40.15%, 750=0.18% 00:16:54.358 lat (msec) : 2=0.03% 00:16:54.358 cpu : usr=1.00%, sys=5.80%, ctx=4000, majf=0, minf=7 00:16:54.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:16:54.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.358 issued rwts: total=1952,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.358 job1: (groupid=0, jobs=1): err= 0: pid=68843: Wed Apr 24 20:08:36 2024 00:16:54.358 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:54.358 slat (nsec): min=7050, max=87742, avg=11190.39, stdev=6998.75 00:16:54.358 clat (usec): min=161, max=3687, avg=285.70, stdev=118.29 00:16:54.358 lat (usec): min=169, max=3719, avg=296.89, stdev=119.19 00:16:54.358 clat percentiles (usec): 00:16:54.358 | 1.00th=[ 186], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 247], 00:16:54.358 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:16:54.358 | 70.00th=[ 293], 80.00th=[ 326], 90.00th=[ 351], 95.00th=[ 367], 00:16:54.358 | 99.00th=[ 449], 99.50th=[ 482], 99.90th=[ 1713], 99.95th=[ 3228], 00:16:54.358 | 99.99th=[ 3687] 00:16:54.358 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:16:54.358 slat (usec): min=10, max=161, avg=16.78, stdev= 9.64 00:16:54.358 clat (usec): min=89, max=1698, avg=170.62, stdev=61.99 00:16:54.358 lat (usec): min=100, max=1711, avg=187.39, stdev=63.61 00:16:54.358 clat percentiles (usec): 00:16:54.358 | 1.00th=[ 99], 5.00th=[ 106], 10.00th=[ 112], 20.00th=[ 122], 00:16:54.358 | 30.00th=[ 135], 40.00th=[ 167], 50.00th=[ 182], 60.00th=[ 190], 00:16:54.358 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 215], 95.00th=[ 225], 00:16:54.358 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 355], 99.95th=[ 1647], 00:16:54.358 | 99.99th=[ 1696] 00:16:54.358 bw ( KiB/s): min= 9144, max= 9144, per=21.82%, avg=9144.00, stdev= 0.00, samples=1 00:16:54.358 iops : min= 2286, max= 2286, avg=2286.00, stdev= 0.00, samples=1 00:16:54.358 lat (usec) : 100=0.66%, 250=60.92%, 500=38.23%, 750=0.02%, 1000=0.02% 00:16:54.358 lat (msec) : 2=0.10%, 4=0.05% 00:16:54.358 cpu : usr=0.90%, sys=4.70%, ctx=4111, majf=0, minf=13 00:16:54.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.358 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.359 job2: (groupid=0, jobs=1): err= 0: pid=68844: Wed Apr 24 20:08:36 2024 00:16:54.359 read: IOPS=2691, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:16:54.359 slat (usec): min=7, max=113, avg=10.49, stdev= 4.93 00:16:54.359 clat (usec): min=141, max=619, avg=185.26, stdev=23.43 00:16:54.359 lat (usec): min=149, max=632, avg=195.74, stdev=24.24 00:16:54.359 clat percentiles (usec): 00:16:54.359 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:16:54.359 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:16:54.359 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 223], 00:16:54.359 | 99.00th=[ 237], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 379], 00:16:54.359 | 99.99th=[ 619] 00:16:54.359 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:54.359 slat (usec): min=10, max=128, avg=15.63, stdev= 7.74 00:16:54.359 clat (usec): min=94, max=397, avg=135.80, stdev=18.61 00:16:54.359 lat (usec): min=105, max=409, avg=151.43, 
stdev=22.31 00:16:54.359 clat percentiles (usec): 00:16:54.359 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 120], 00:16:54.359 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 141], 00:16:54.359 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:16:54.359 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 219], 99.95th=[ 285], 00:16:54.359 | 99.99th=[ 400] 00:16:54.359 bw ( KiB/s): min=12288, max=12288, per=29.32%, avg=12288.00, stdev= 0.00, samples=1 00:16:54.359 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:54.359 lat (usec) : 100=0.17%, 250=99.62%, 500=0.19%, 750=0.02% 00:16:54.359 cpu : usr=1.30%, sys=6.20%, ctx=5769, majf=0, minf=12 00:16:54.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.359 issued rwts: total=2694,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.359 job3: (groupid=0, jobs=1): err= 0: pid=68845: Wed Apr 24 20:08:36 2024 00:16:54.359 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:54.359 slat (usec): min=6, max=111, avg= 8.64, stdev= 4.14 00:16:54.359 clat (usec): min=134, max=538, avg=166.31, stdev=15.27 00:16:54.359 lat (usec): min=141, max=545, avg=174.95, stdev=16.71 00:16:54.359 clat percentiles (usec): 00:16:54.359 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:16:54.359 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:16:54.359 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 192], 00:16:54.359 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 237], 99.95th=[ 343], 00:16:54.359 | 99.99th=[ 537] 00:16:54.359 write: IOPS=3304, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:16:54.359 slat (usec): min=10, max=168, avg=15.27, stdev=10.29 00:16:54.359 clat (usec): min=41, max=310, avg=122.16, stdev=14.52 00:16:54.359 lat (usec): min=101, max=334, avg=137.44, stdev=19.35 00:16:54.359 clat percentiles (usec): 00:16:54.359 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 112], 00:16:54.359 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 121], 60.00th=[ 124], 00:16:54.359 | 70.00th=[ 127], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:16:54.359 | 99.00th=[ 163], 99.50th=[ 188], 99.90th=[ 255], 99.95th=[ 293], 00:16:54.359 | 99.99th=[ 310] 00:16:54.359 bw ( KiB/s): min=13232, max=13232, per=31.57%, avg=13232.00, stdev= 0.00, samples=1 00:16:54.359 iops : min= 3308, max= 3308, avg=3308.00, stdev= 0.00, samples=1 00:16:54.359 lat (usec) : 50=0.02%, 100=0.66%, 250=99.22%, 500=0.09%, 750=0.02% 00:16:54.359 cpu : usr=1.10%, sys=6.50%, ctx=6409, majf=0, minf=13 00:16:54.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.359 issued rwts: total=3072,3308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.359 00:16:54.359 Run status group 0 (all jobs): 00:16:54.359 READ: bw=38.1MiB/s (40.0MB/s), 7800KiB/s-12.0MiB/s (7987kB/s-12.6MB/s), io=38.1MiB (40.0MB), run=1001-1001msec 00:16:54.359 WRITE: bw=40.9MiB/s (42.9MB/s), 8184KiB/s-12.9MiB/s (8380kB/s-13.5MB/s), io=41.0MiB (43.0MB), run=1001-1001msec 
00:16:54.359 00:16:54.359 Disk stats (read/write): 00:16:54.359 nvme0n1: ios=1586/2000, merge=0/0, ticks=438/390, in_queue=828, util=89.48% 00:16:54.359 nvme0n2: ios=1667/2048, merge=0/0, ticks=492/353, in_queue=845, util=89.22% 00:16:54.359 nvme0n3: ios=2445/2560, merge=0/0, ticks=486/373, in_queue=859, util=90.74% 00:16:54.359 nvme0n4: ios=2587/3069, merge=0/0, ticks=467/392, in_queue=859, util=90.70% 00:16:54.359 20:08:36 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:54.359 [global] 00:16:54.359 thread=1 00:16:54.359 invalidate=1 00:16:54.359 rw=write 00:16:54.359 time_based=1 00:16:54.359 runtime=1 00:16:54.359 ioengine=libaio 00:16:54.359 direct=1 00:16:54.359 bs=4096 00:16:54.359 iodepth=128 00:16:54.359 norandommap=0 00:16:54.359 numjobs=1 00:16:54.359 00:16:54.359 verify_dump=1 00:16:54.359 verify_backlog=512 00:16:54.359 verify_state_save=0 00:16:54.359 do_verify=1 00:16:54.359 verify=crc32c-intel 00:16:54.359 [job0] 00:16:54.359 filename=/dev/nvme0n1 00:16:54.359 [job1] 00:16:54.359 filename=/dev/nvme0n2 00:16:54.359 [job2] 00:16:54.359 filename=/dev/nvme0n3 00:16:54.359 [job3] 00:16:54.359 filename=/dev/nvme0n4 00:16:54.359 Could not set queue depth (nvme0n1) 00:16:54.359 Could not set queue depth (nvme0n2) 00:16:54.359 Could not set queue depth (nvme0n3) 00:16:54.359 Could not set queue depth (nvme0n4) 00:16:54.359 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.359 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.359 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.359 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.359 fio-3.35 00:16:54.359 Starting 4 threads 00:16:55.735 00:16:55.735 job0: (groupid=0, jobs=1): err= 0: pid=68904: Wed Apr 24 20:08:37 2024 00:16:55.735 read: IOPS=2879, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1002msec) 00:16:55.735 slat (usec): min=4, max=7918, avg=165.12, stdev=823.25 00:16:55.735 clat (usec): min=1073, max=27776, avg=21394.20, stdev=2842.35 00:16:55.736 lat (usec): min=1096, max=27805, avg=21559.32, stdev=2729.82 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[ 6456], 5.00th=[17171], 10.00th=[19006], 20.00th=[20579], 00:16:55.736 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:16:55.736 | 70.00th=[22152], 80.00th=[22938], 90.00th=[23987], 95.00th=[25035], 00:16:55.736 | 99.00th=[27395], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:16:55.736 | 99.99th=[27657] 00:16:55.736 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:16:55.736 slat (usec): min=14, max=9415, avg=161.21, stdev=743.98 00:16:55.736 clat (usec): min=11845, max=30124, avg=21037.16, stdev=2794.74 00:16:55.736 lat (usec): min=13886, max=30176, avg=21198.37, stdev=2712.92 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[15008], 5.00th=[16581], 10.00th=[17957], 20.00th=[18744], 00:16:55.736 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], 00:16:55.736 | 70.00th=[21890], 80.00th=[22152], 90.00th=[24511], 95.00th=[26608], 00:16:55.736 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:16:55.736 | 99.99th=[30016] 00:16:55.736 bw ( KiB/s): min=12288, max=12312, per=25.12%, avg=12300.00, stdev=16.97, samples=2 00:16:55.736 iops : min= 
3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:55.736 lat (msec) : 2=0.08%, 10=0.54%, 20=18.25%, 50=81.13% 00:16:55.736 cpu : usr=2.70%, sys=12.29%, ctx=189, majf=0, minf=4 00:16:55.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:55.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.736 issued rwts: total=2885,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.736 job1: (groupid=0, jobs=1): err= 0: pid=68905: Wed Apr 24 20:08:37 2024 00:16:55.736 read: IOPS=2906, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1002msec) 00:16:55.736 slat (usec): min=6, max=6064, avg=164.29, stdev=627.44 00:16:55.736 clat (usec): min=1327, max=26581, avg=21005.40, stdev=2923.20 00:16:55.736 lat (usec): min=1345, max=26614, avg=21169.69, stdev=2878.64 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[ 6652], 5.00th=[17695], 10.00th=[19530], 20.00th=[20841], 00:16:55.736 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21365], 60.00th=[21627], 00:16:55.736 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22676], 95.00th=[23725], 00:16:55.736 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26608], 99.95th=[26608], 00:16:55.736 | 99.99th=[26608] 00:16:55.736 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:16:55.736 slat (usec): min=9, max=5852, avg=164.19, stdev=793.54 00:16:55.736 clat (usec): min=15012, max=24907, avg=21167.89, stdev=1298.23 00:16:55.736 lat (usec): min=15659, max=27880, avg=21332.08, stdev=1092.33 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[16319], 5.00th=[19268], 10.00th=[20055], 20.00th=[20317], 00:16:55.736 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21365], 60.00th=[21627], 00:16:55.736 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[22938], 00:16:55.736 | 99.00th=[23987], 99.50th=[24511], 99.90th=[24773], 99.95th=[25035], 00:16:55.736 | 99.99th=[25035] 00:16:55.736 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=1 00:16:55.736 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:55.736 lat (msec) : 2=0.42%, 4=0.03%, 10=0.53%, 20=9.84%, 50=89.17% 00:16:55.736 cpu : usr=1.80%, sys=7.99%, ctx=529, majf=0, minf=7 00:16:55.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:16:55.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.736 issued rwts: total=2912,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.736 job2: (groupid=0, jobs=1): err= 0: pid=68906: Wed Apr 24 20:08:37 2024 00:16:55.736 read: IOPS=2971, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1002msec) 00:16:55.736 slat (usec): min=14, max=8064, avg=168.40, stdev=825.65 00:16:55.736 clat (usec): min=371, max=29058, avg=20985.11, stdev=3096.89 00:16:55.736 lat (usec): min=3304, max=29077, avg=21153.50, stdev=3016.14 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[ 3982], 5.00th=[16450], 10.00th=[18220], 20.00th=[20579], 00:16:55.736 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:16:55.736 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22938], 95.00th=[26084], 00:16:55.736 | 99.00th=[28967], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:16:55.736 | 
99.99th=[28967] 00:16:55.736 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:16:55.736 slat (usec): min=19, max=6323, avg=152.16, stdev=668.10 00:16:55.736 clat (usec): min=11751, max=28676, avg=20637.49, stdev=2692.28 00:16:55.736 lat (usec): min=13681, max=28728, avg=20789.65, stdev=2605.51 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[13698], 5.00th=[15926], 10.00th=[16450], 20.00th=[18482], 00:16:55.736 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21365], 00:16:55.736 | 70.00th=[21627], 80.00th=[22152], 90.00th=[22676], 95.00th=[26346], 00:16:55.736 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:16:55.736 | 99.99th=[28705] 00:16:55.736 bw ( KiB/s): min=12288, max=12312, per=25.12%, avg=12300.00, stdev=16.97, samples=2 00:16:55.736 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:16:55.736 lat (usec) : 500=0.02% 00:16:55.736 lat (msec) : 4=0.48%, 10=0.58%, 20=19.09%, 50=79.83% 00:16:55.736 cpu : usr=4.20%, sys=12.19%, ctx=193, majf=0, minf=5 00:16:55.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:55.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.736 issued rwts: total=2977,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.736 job3: (groupid=0, jobs=1): err= 0: pid=68907: Wed Apr 24 20:08:37 2024 00:16:55.736 read: IOPS=2933, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1004msec) 00:16:55.736 slat (usec): min=6, max=6403, avg=164.89, stdev=679.52 00:16:55.736 clat (usec): min=1757, max=25010, avg=20919.11, stdev=2291.60 00:16:55.736 lat (usec): min=6222, max=25021, avg=21084.00, stdev=2208.39 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[ 6915], 5.00th=[17433], 10.00th=[18482], 20.00th=[20317], 00:16:55.736 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21627], 00:16:55.736 | 70.00th=[21890], 80.00th=[22152], 90.00th=[22414], 95.00th=[23200], 00:16:55.736 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25035], 99.95th=[25035], 00:16:55.736 | 99.99th=[25035] 00:16:55.736 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:16:55.736 slat (usec): min=9, max=5725, avg=162.17, stdev=783.04 00:16:55.736 clat (usec): min=14439, max=27226, avg=21124.67, stdev=1621.79 00:16:55.736 lat (usec): min=15159, max=27243, avg=21286.84, stdev=1457.68 00:16:55.736 clat percentiles (usec): 00:16:55.736 | 1.00th=[16319], 5.00th=[18220], 10.00th=[19268], 20.00th=[20317], 00:16:55.736 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21103], 60.00th=[21627], 00:16:55.736 | 70.00th=[22152], 80.00th=[22152], 90.00th=[22676], 95.00th=[23200], 00:16:55.736 | 99.00th=[25822], 99.50th=[26346], 99.90th=[27132], 99.95th=[27132], 00:16:55.736 | 99.99th=[27132] 00:16:55.736 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:16:55.736 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:55.736 lat (msec) : 2=0.02%, 10=0.53%, 20=15.99%, 50=83.46% 00:16:55.736 cpu : usr=1.69%, sys=8.28%, ctx=502, majf=0, minf=5 00:16:55.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:55.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.736 issued rwts: 
total=2945,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.736 00:16:55.736 Run status group 0 (all jobs): 00:16:55.736 READ: bw=45.6MiB/s (47.8MB/s), 11.2MiB/s-11.6MiB/s (11.8MB/s-12.2MB/s), io=45.8MiB (48.0MB), run=1002-1004msec 00:16:55.736 WRITE: bw=47.8MiB/s (50.1MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1002-1004msec 00:16:55.736 00:16:55.736 Disk stats (read/write): 00:16:55.736 nvme0n1: ios=2610/2624, merge=0/0, ticks=13005/11846, in_queue=24851, util=88.87% 00:16:55.736 nvme0n2: ios=2609/2656, merge=0/0, ticks=13035/12122, in_queue=25157, util=89.83% 00:16:55.736 nvme0n3: ios=2595/2784, merge=0/0, ticks=12911/11963, in_queue=24874, util=90.63% 00:16:55.736 nvme0n4: ios=2587/2729, merge=0/0, ticks=13102/12211, in_queue=25313, util=90.60% 00:16:55.736 20:08:37 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:55.736 [global] 00:16:55.736 thread=1 00:16:55.736 invalidate=1 00:16:55.736 rw=randwrite 00:16:55.736 time_based=1 00:16:55.736 runtime=1 00:16:55.736 ioengine=libaio 00:16:55.736 direct=1 00:16:55.736 bs=4096 00:16:55.736 iodepth=128 00:16:55.736 norandommap=0 00:16:55.736 numjobs=1 00:16:55.736 00:16:55.736 verify_dump=1 00:16:55.736 verify_backlog=512 00:16:55.736 verify_state_save=0 00:16:55.736 do_verify=1 00:16:55.736 verify=crc32c-intel 00:16:55.736 [job0] 00:16:55.736 filename=/dev/nvme0n1 00:16:55.736 [job1] 00:16:55.736 filename=/dev/nvme0n2 00:16:55.736 [job2] 00:16:55.736 filename=/dev/nvme0n3 00:16:55.736 [job3] 00:16:55.736 filename=/dev/nvme0n4 00:16:55.736 Could not set queue depth (nvme0n1) 00:16:55.736 Could not set queue depth (nvme0n2) 00:16:55.736 Could not set queue depth (nvme0n3) 00:16:55.736 Could not set queue depth (nvme0n4) 00:16:55.736 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.736 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.736 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.737 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.737 fio-3.35 00:16:55.737 Starting 4 threads 00:16:57.112 00:16:57.112 job0: (groupid=0, jobs=1): err= 0: pid=68961: Wed Apr 24 20:08:39 2024 00:16:57.112 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:16:57.112 slat (usec): min=10, max=6938, avg=112.10, stdev=516.25 00:16:57.112 clat (usec): min=10103, max=18584, avg=15100.79, stdev=938.59 00:16:57.112 lat (usec): min=12930, max=18632, avg=15212.89, stdev=792.34 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[11994], 5.00th=[13173], 10.00th=[14353], 20.00th=[14615], 00:16:57.112 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:16:57.112 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15795], 95.00th=[16057], 00:16:57.112 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:16:57.112 | 99.99th=[18482] 00:16:57.112 write: IOPS=4508, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1001msec); 0 zone resets 00:16:57.112 slat (usec): min=13, max=3224, avg=110.15, stdev=417.80 00:16:57.112 clat (usec): min=230, max=15707, avg=14283.46, stdev=1299.52 00:16:57.112 lat (usec): min=2804, max=16662, avg=14393.61, stdev=1230.62 00:16:57.112 clat percentiles (usec): 
00:16:57.112 | 1.00th=[ 6980], 5.00th=[12649], 10.00th=[13829], 20.00th=[14091], 00:16:57.112 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:16:57.112 | 70.00th=[14746], 80.00th=[14877], 90.00th=[15008], 95.00th=[15139], 00:16:57.112 | 99.00th=[15533], 99.50th=[15664], 99.90th=[15664], 99.95th=[15664], 00:16:57.112 | 99.99th=[15664] 00:16:57.112 bw ( KiB/s): min=17082, max=17082, per=24.77%, avg=17082.00, stdev= 0.00, samples=1 00:16:57.112 iops : min= 4272, max= 4272, avg=4272.00, stdev= 0.00, samples=1 00:16:57.112 lat (usec) : 250=0.01% 00:16:57.112 lat (msec) : 4=0.36%, 10=0.38%, 20=99.24% 00:16:57.112 cpu : usr=5.10%, sys=17.70%, ctx=274, majf=0, minf=11 00:16:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.112 issued rwts: total=4096,4513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.112 job1: (groupid=0, jobs=1): err= 0: pid=68962: Wed Apr 24 20:08:39 2024 00:16:57.112 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1002msec) 00:16:57.112 slat (usec): min=7, max=5524, avg=110.34, stdev=428.67 00:16:57.112 clat (usec): min=701, max=19990, avg=14511.06, stdev=1596.26 00:16:57.112 lat (usec): min=3137, max=20031, avg=14621.41, stdev=1628.49 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[ 7767], 5.00th=[12911], 10.00th=[13304], 20.00th=[13960], 00:16:57.112 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:16:57.112 | 70.00th=[14877], 80.00th=[15139], 90.00th=[16319], 95.00th=[16909], 00:16:57.112 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[19268], 00:16:57.112 | 99.99th=[20055] 00:16:57.112 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:16:57.112 slat (usec): min=15, max=3988, avg=107.13, stdev=457.94 00:16:57.112 clat (usec): min=10975, max=18329, avg=14213.54, stdev=925.78 00:16:57.112 lat (usec): min=11005, max=18380, avg=14320.67, stdev=1021.91 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13173], 20.00th=[13566], 00:16:57.112 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:16:57.112 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[16188], 00:16:57.112 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:16:57.112 | 99.99th=[18220] 00:16:57.112 bw ( KiB/s): min=17819, max=18936, per=26.64%, avg=18377.50, stdev=789.84, samples=2 00:16:57.112 iops : min= 4454, max= 4734, avg=4594.00, stdev=197.99, samples=2 00:16:57.112 lat (usec) : 750=0.01% 00:16:57.112 lat (msec) : 4=0.31%, 10=0.48%, 20=99.21% 00:16:57.112 cpu : usr=4.80%, sys=17.68%, ctx=326, majf=0, minf=13 00:16:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.112 issued rwts: total=4206,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.112 job2: (groupid=0, jobs=1): err= 0: pid=68963: Wed Apr 24 20:08:39 2024 00:16:57.112 read: IOPS=3641, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1002msec) 00:16:57.112 slat (usec): min=4, max=9344, avg=126.76, stdev=612.10 
00:16:57.112 clat (usec): min=332, max=26039, avg=16611.37, stdev=2227.94 00:16:57.112 lat (usec): min=3701, max=26124, avg=16738.13, stdev=2149.94 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[ 7832], 5.00th=[15270], 10.00th=[15533], 20.00th=[15926], 00:16:57.112 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16581], 60.00th=[16909], 00:16:57.112 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[19530], 00:16:57.112 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:16:57.112 | 99.99th=[26084] 00:16:57.112 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:16:57.112 slat (usec): min=6, max=5685, avg=122.74, stdev=513.40 00:16:57.112 clat (usec): min=10869, max=19567, avg=16048.27, stdev=990.42 00:16:57.112 lat (usec): min=11422, max=19575, avg=16171.01, stdev=856.09 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[12518], 5.00th=[14091], 10.00th=[15401], 20.00th=[15664], 00:16:57.112 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16057], 60.00th=[16319], 00:16:57.112 | 70.00th=[16450], 80.00th=[16712], 90.00th=[16909], 95.00th=[17171], 00:16:57.112 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:16:57.112 | 99.99th=[19530] 00:16:57.112 bw ( KiB/s): min=15880, max=16384, per=23.39%, avg=16132.00, stdev=356.38, samples=2 00:16:57.112 iops : min= 3970, max= 4096, avg=4033.00, stdev=89.10, samples=2 00:16:57.112 lat (usec) : 500=0.01% 00:16:57.112 lat (msec) : 4=0.17%, 10=0.66%, 20=97.39%, 50=1.77% 00:16:57.112 cpu : usr=4.50%, sys=14.19%, ctx=265, majf=0, minf=11 00:16:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.112 issued rwts: total=3649,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.112 job3: (groupid=0, jobs=1): err= 0: pid=68964: Wed Apr 24 20:08:39 2024 00:16:57.112 read: IOPS=3757, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1004msec) 00:16:57.112 slat (usec): min=8, max=7986, avg=123.86, stdev=576.03 00:16:57.112 clat (usec): min=2791, max=20948, avg=16184.99, stdev=1820.35 00:16:57.112 lat (usec): min=2809, max=20968, avg=16308.86, stdev=1742.76 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[ 6915], 5.00th=[13698], 10.00th=[15270], 20.00th=[15795], 00:16:57.112 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:16:57.112 | 70.00th=[16909], 80.00th=[16909], 90.00th=[17171], 95.00th=[17695], 00:16:57.112 | 99.00th=[20841], 99.50th=[20841], 99.90th=[20841], 99.95th=[20841], 00:16:57.112 | 99.99th=[20841] 00:16:57.112 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:57.112 slat (usec): min=6, max=3807, avg=120.63, stdev=495.09 00:16:57.112 clat (usec): min=11335, max=21083, avg=16003.72, stdev=916.85 00:16:57.112 lat (usec): min=12221, max=21150, avg=16124.35, stdev=766.35 00:16:57.112 clat percentiles (usec): 00:16:57.112 | 1.00th=[12911], 5.00th=[14877], 10.00th=[15270], 20.00th=[15533], 00:16:57.112 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16057], 60.00th=[16188], 00:16:57.112 | 70.00th=[16319], 80.00th=[16581], 90.00th=[16712], 95.00th=[16909], 00:16:57.112 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:16:57.112 | 99.99th=[21103] 00:16:57.112 bw ( KiB/s): min=16351, max=16384, per=23.73%, avg=16367.50, 
stdev=23.33, samples=2 00:16:57.112 iops : min= 4087, max= 4096, avg=4091.50, stdev= 6.36, samples=2 00:16:57.112 lat (msec) : 4=0.37%, 10=0.41%, 20=97.65%, 50=1.58% 00:16:57.112 cpu : usr=4.79%, sys=15.15%, ctx=248, majf=0, minf=10 00:16:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.112 issued rwts: total=3773,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.112 00:16:57.112 Run status group 0 (all jobs): 00:16:57.112 READ: bw=61.2MiB/s (64.1MB/s), 14.2MiB/s-16.4MiB/s (14.9MB/s-17.2MB/s), io=61.4MiB (64.4MB), run=1001-1004msec 00:16:57.112 WRITE: bw=67.4MiB/s (70.6MB/s), 15.9MiB/s-18.0MiB/s (16.7MB/s-18.8MB/s), io=67.6MiB (70.9MB), run=1001-1004msec 00:16:57.112 00:16:57.112 Disk stats (read/write): 00:16:57.112 nvme0n1: ios=3634/3968, merge=0/0, ticks=11878/11614, in_queue=23492, util=89.39% 00:16:57.112 nvme0n2: ios=3701/4096, merge=0/0, ticks=16710/15315, in_queue=32025, util=90.95% 00:16:57.112 nvme0n3: ios=3254/3584, merge=0/0, ticks=11985/12137, in_queue=24122, util=90.45% 00:16:57.112 nvme0n4: ios=3336/3584, merge=0/0, ticks=12072/11948, in_queue=24020, util=90.92% 00:16:57.112 20:08:39 -- target/fio.sh@55 -- # sync 00:16:57.112 20:08:39 -- target/fio.sh@59 -- # fio_pid=68977 00:16:57.112 20:08:39 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:57.112 20:08:39 -- target/fio.sh@61 -- # sleep 3 00:16:57.112 [global] 00:16:57.113 thread=1 00:16:57.113 invalidate=1 00:16:57.113 rw=read 00:16:57.113 time_based=1 00:16:57.113 runtime=10 00:16:57.113 ioengine=libaio 00:16:57.113 direct=1 00:16:57.113 bs=4096 00:16:57.113 iodepth=1 00:16:57.113 norandommap=1 00:16:57.113 numjobs=1 00:16:57.113 00:16:57.113 [job0] 00:16:57.113 filename=/dev/nvme0n1 00:16:57.113 [job1] 00:16:57.113 filename=/dev/nvme0n2 00:16:57.113 [job2] 00:16:57.113 filename=/dev/nvme0n3 00:16:57.113 [job3] 00:16:57.113 filename=/dev/nvme0n4 00:16:57.113 Could not set queue depth (nvme0n1) 00:16:57.113 Could not set queue depth (nvme0n2) 00:16:57.113 Could not set queue depth (nvme0n3) 00:16:57.113 Could not set queue depth (nvme0n4) 00:16:57.113 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.113 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.113 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.113 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.113 fio-3.35 00:16:57.113 Starting 4 threads 00:17:00.396 20:08:42 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:00.396 fio: pid=69024, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:00.396 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=49065984, buflen=4096 00:17:00.396 20:08:42 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:00.396 fio: pid=69023, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:00.396 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=47214592, buflen=4096 00:17:00.396 
20:08:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.396 20:08:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:00.655 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=57896960, buflen=4096 00:17:00.655 fio: pid=69021, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:00.655 20:08:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.655 20:08:42 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:00.916 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=58224640, buflen=4096 00:17:00.916 fio: pid=69022, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:00.916 00:17:00.916 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69021: Wed Apr 24 20:08:43 2024 00:17:00.916 read: IOPS=4318, BW=16.9MiB/s (17.7MB/s)(55.2MiB/3273msec) 00:17:00.916 slat (usec): min=4, max=11838, avg=11.30, stdev=169.59 00:17:00.916 clat (usec): min=102, max=4216, avg=219.31, stdev=53.46 00:17:00.916 lat (usec): min=112, max=12105, avg=230.61, stdev=177.67 00:17:00.916 clat percentiles (usec): 00:17:00.916 | 1.00th=[ 131], 5.00th=[ 145], 10.00th=[ 169], 20.00th=[ 206], 00:17:00.916 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:17:00.916 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:17:00.916 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 383], 99.95th=[ 611], 00:17:00.916 | 99.99th=[ 1844] 00:17:00.916 bw ( KiB/s): min=16648, max=18626, per=29.23%, avg=17193.17, stdev=723.78, samples=6 00:17:00.916 iops : min= 4162, max= 4656, avg=4298.17, stdev=180.75, samples=6 00:17:00.916 lat (usec) : 250=91.59%, 500=8.34%, 750=0.01%, 1000=0.01% 00:17:00.916 lat (msec) : 2=0.04%, 10=0.01% 00:17:00.916 cpu : usr=0.73%, sys=3.51%, ctx=14144, majf=0, minf=1 00:17:00.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 issued rwts: total=14136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.916 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69022: Wed Apr 24 20:08:43 2024 00:17:00.916 read: IOPS=4030, BW=15.7MiB/s (16.5MB/s)(55.5MiB/3527msec) 00:17:00.916 slat (usec): min=6, max=15862, avg=14.11, stdev=230.18 00:17:00.916 clat (usec): min=91, max=1675, avg=233.07, stdev=54.75 00:17:00.916 lat (usec): min=97, max=16069, avg=247.17, stdev=236.28 00:17:00.916 clat percentiles (usec): 00:17:00.916 | 1.00th=[ 106], 5.00th=[ 121], 10.00th=[ 135], 20.00th=[ 210], 00:17:00.916 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:17:00.916 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:17:00.916 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 396], 99.95th=[ 807], 00:17:00.916 | 99.99th=[ 1598] 00:17:00.916 bw ( KiB/s): min=14872, max=15802, per=25.85%, avg=15204.50, stdev=324.92, samples=6 00:17:00.916 iops : min= 3718, max= 3950, avg=3801.00, stdev=81.03, samples=6 00:17:00.916 lat (usec) : 100=0.34%, 250=53.57%, 500=46.03%, 1000=0.04% 00:17:00.916 lat (msec) : 2=0.03% 00:17:00.916 cpu : 
usr=0.37%, sys=3.60%, ctx=14229, majf=0, minf=1 00:17:00.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 issued rwts: total=14216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.916 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69023: Wed Apr 24 20:08:43 2024 00:17:00.916 read: IOPS=3778, BW=14.8MiB/s (15.5MB/s)(45.0MiB/3051msec) 00:17:00.916 slat (usec): min=6, max=8071, avg=10.12, stdev=105.81 00:17:00.916 clat (usec): min=123, max=1699, avg=253.65, stdev=31.88 00:17:00.916 lat (usec): min=131, max=8354, avg=263.77, stdev=110.81 00:17:00.916 clat percentiles (usec): 00:17:00.916 | 1.00th=[ 167], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 239], 00:17:00.916 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:17:00.916 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:17:00.916 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 457], 99.95th=[ 537], 00:17:00.916 | 99.99th=[ 1434] 00:17:00.916 bw ( KiB/s): min=14872, max=15321, per=25.71%, avg=15120.20, stdev=175.10, samples=5 00:17:00.916 iops : min= 3718, max= 3830, avg=3780.00, stdev=43.70, samples=5 00:17:00.916 lat (usec) : 250=41.85%, 500=58.07%, 750=0.03% 00:17:00.916 lat (msec) : 2=0.03% 00:17:00.916 cpu : usr=0.49%, sys=3.18%, ctx=11530, majf=0, minf=1 00:17:00.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 issued rwts: total=11528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.916 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69024: Wed Apr 24 20:08:43 2024 00:17:00.916 read: IOPS=4215, BW=16.5MiB/s (17.3MB/s)(46.8MiB/2842msec) 00:17:00.916 slat (usec): min=4, max=105, avg= 7.38, stdev= 3.29 00:17:00.916 clat (usec): min=128, max=1825, avg=229.22, stdev=26.68 00:17:00.916 lat (usec): min=141, max=1833, avg=236.60, stdev=27.15 00:17:00.916 clat percentiles (usec): 00:17:00.916 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:17:00.916 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:17:00.916 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:17:00.916 | 99.00th=[ 277], 99.50th=[ 293], 99.90th=[ 363], 99.95th=[ 416], 00:17:00.916 | 99.99th=[ 1713] 00:17:00.916 bw ( KiB/s): min=16681, max=17173, per=28.76%, avg=16911.60, stdev=187.71, samples=5 00:17:00.916 iops : min= 4170, max= 4293, avg=4227.80, stdev=46.92, samples=5 00:17:00.916 lat (usec) : 250=89.94%, 500=10.03%, 750=0.01% 00:17:00.916 lat (msec) : 2=0.02% 00:17:00.916 cpu : usr=0.46%, sys=3.20%, ctx=11984, majf=0, minf=2 00:17:00.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:00.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.916 issued rwts: total=11980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.916 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:00.917 
00:17:00.917 Run status group 0 (all jobs): 00:17:00.917 READ: bw=57.4MiB/s (60.2MB/s), 14.8MiB/s-16.9MiB/s (15.5MB/s-17.7MB/s), io=203MiB (212MB), run=2842-3527msec 00:17:00.917 00:17:00.917 Disk stats (read/write): 00:17:00.917 nvme0n1: ios=13501/0, merge=0/0, ticks=2916/0, in_queue=2916, util=95.35% 00:17:00.917 nvme0n2: ios=13142/0, merge=0/0, ticks=3218/0, in_queue=3218, util=95.22% 00:17:00.917 nvme0n3: ios=10952/0, merge=0/0, ticks=2806/0, in_queue=2806, util=96.68% 00:17:00.917 nvme0n4: ios=11116/0, merge=0/0, ticks=2480/0, in_queue=2480, util=96.52% 00:17:00.917 20:08:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.917 20:08:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:01.175 20:08:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.175 20:08:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:01.433 20:08:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.433 20:08:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:01.691 20:08:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.691 20:08:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:01.691 20:08:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.691 20:08:43 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:01.949 20:08:44 -- target/fio.sh@69 -- # fio_status=0 00:17:01.949 20:08:44 -- target/fio.sh@70 -- # wait 68977 00:17:01.949 20:08:44 -- target/fio.sh@70 -- # fio_status=4 00:17:01.949 20:08:44 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:01.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:01.949 20:08:44 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:01.949 20:08:44 -- common/autotest_common.sh@1205 -- # local i=0 00:17:01.949 20:08:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:01.949 20:08:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.949 20:08:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:01.949 20:08:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:01.949 20:08:44 -- common/autotest_common.sh@1217 -- # return 0 00:17:01.949 20:08:44 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:01.949 nvmf hotplug test: fio failed as expected 00:17:01.949 20:08:44 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:01.949 20:08:44 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:02.207 20:08:44 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:02.207 20:08:44 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:02.207 20:08:44 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:02.207 20:08:44 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:02.207 20:08:44 -- target/fio.sh@91 -- # nvmftestfini 00:17:02.207 20:08:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:02.207 20:08:44 -- nvmf/common.sh@117 -- # sync 00:17:02.207 20:08:44 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:02.207 20:08:44 -- nvmf/common.sh@120 -- # set +e 00:17:02.207 20:08:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:02.207 20:08:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:02.207 rmmod nvme_tcp 00:17:02.207 rmmod nvme_fabrics 00:17:02.548 rmmod nvme_keyring 00:17:02.548 20:08:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.548 20:08:44 -- nvmf/common.sh@124 -- # set -e 00:17:02.548 20:08:44 -- nvmf/common.sh@125 -- # return 0 00:17:02.548 20:08:44 -- nvmf/common.sh@478 -- # '[' -n 68600 ']' 00:17:02.548 20:08:44 -- nvmf/common.sh@479 -- # killprocess 68600 00:17:02.548 20:08:44 -- common/autotest_common.sh@936 -- # '[' -z 68600 ']' 00:17:02.548 20:08:44 -- common/autotest_common.sh@940 -- # kill -0 68600 00:17:02.548 20:08:44 -- common/autotest_common.sh@941 -- # uname 00:17:02.548 20:08:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:02.548 20:08:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68600 00:17:02.548 killing process with pid 68600 00:17:02.548 20:08:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:02.548 20:08:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:02.548 20:08:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68600' 00:17:02.548 20:08:44 -- common/autotest_common.sh@955 -- # kill 68600 00:17:02.548 [2024-04-24 20:08:44.514517] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:02.548 20:08:44 -- common/autotest_common.sh@960 -- # wait 68600 00:17:02.548 20:08:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:02.548 20:08:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:02.548 20:08:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:02.548 20:08:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.548 20:08:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.548 20:08:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.548 20:08:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.548 20:08:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.548 20:08:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:02.831 00:17:02.831 real 0m18.581s 00:17:02.831 user 1m10.867s 00:17:02.831 sys 0m8.329s 00:17:02.831 20:08:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.831 20:08:44 -- common/autotest_common.sh@10 -- # set +x 00:17:02.831 ************************************ 00:17:02.831 END TEST nvmf_fio_target 00:17:02.831 ************************************ 00:17:02.831 20:08:44 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:02.831 20:08:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:02.831 20:08:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.831 20:08:44 -- common/autotest_common.sh@10 -- # set +x 00:17:02.831 ************************************ 00:17:02.831 START TEST nvmf_bdevio 00:17:02.831 ************************************ 00:17:02.831 20:08:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:02.831 * Looking for test storage... 
00:17:02.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:02.831 20:08:45 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:02.831 20:08:45 -- nvmf/common.sh@7 -- # uname -s 00:17:02.831 20:08:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.831 20:08:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.831 20:08:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.831 20:08:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.831 20:08:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.831 20:08:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.831 20:08:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.831 20:08:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.831 20:08:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.831 20:08:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.831 20:08:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:17:02.831 20:08:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:17:02.831 20:08:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.831 20:08:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.831 20:08:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:02.831 20:08:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.831 20:08:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:02.831 20:08:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.831 20:08:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.831 20:08:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.831 20:08:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.831 20:08:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.831 20:08:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.832 20:08:45 -- paths/export.sh@5 -- # export PATH 00:17:02.832 20:08:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.832 20:08:45 -- nvmf/common.sh@47 -- # : 0 00:17:02.832 20:08:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:02.832 20:08:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:02.832 20:08:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.832 20:08:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.832 20:08:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.832 20:08:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:02.832 20:08:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:02.832 20:08:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:02.832 20:08:45 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.832 20:08:45 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.832 20:08:45 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:02.832 20:08:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:02.832 20:08:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.832 20:08:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:02.832 20:08:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:02.832 20:08:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:02.832 20:08:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.832 20:08:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.832 20:08:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.090 20:08:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:03.090 20:08:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:03.090 20:08:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:03.090 20:08:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:03.090 20:08:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:03.090 20:08:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:03.090 20:08:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.090 20:08:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.090 20:08:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.090 20:08:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:03.090 20:08:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.090 20:08:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.090 20:08:45 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.090 20:08:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.090 20:08:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.090 20:08:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.090 20:08:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.090 20:08:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.090 20:08:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:03.090 20:08:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:03.090 Cannot find device "nvmf_tgt_br" 00:17:03.090 20:08:45 -- nvmf/common.sh@155 -- # true 00:17:03.090 20:08:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.090 Cannot find device "nvmf_tgt_br2" 00:17:03.090 20:08:45 -- nvmf/common.sh@156 -- # true 00:17:03.090 20:08:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:03.090 20:08:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:03.090 Cannot find device "nvmf_tgt_br" 00:17:03.090 20:08:45 -- nvmf/common.sh@158 -- # true 00:17:03.090 20:08:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:03.090 Cannot find device "nvmf_tgt_br2" 00:17:03.090 20:08:45 -- nvmf/common.sh@159 -- # true 00:17:03.090 20:08:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:03.090 20:08:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:03.090 20:08:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.090 20:08:45 -- nvmf/common.sh@162 -- # true 00:17:03.090 20:08:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.090 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.090 20:08:45 -- nvmf/common.sh@163 -- # true 00:17:03.090 20:08:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.090 20:08:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.090 20:08:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.090 20:08:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.090 20:08:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.090 20:08:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.090 20:08:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.090 20:08:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.090 20:08:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.090 20:08:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:03.349 20:08:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:03.349 20:08:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:03.349 20:08:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:03.349 20:08:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.349 20:08:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.349 20:08:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:03.349 20:08:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:03.349 20:08:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:03.349 20:08:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.349 20:08:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.349 20:08:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.349 20:08:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.349 20:08:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.349 20:08:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:03.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:17:03.349 00:17:03.349 --- 10.0.0.2 ping statistics --- 00:17:03.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.349 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:03.349 20:08:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:03.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:17:03.349 00:17:03.349 --- 10.0.0.3 ping statistics --- 00:17:03.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.349 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:03.349 20:08:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:17:03.349 00:17:03.349 --- 10.0.0.1 ping statistics --- 00:17:03.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.349 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:03.349 20:08:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.349 20:08:45 -- nvmf/common.sh@422 -- # return 0 00:17:03.349 20:08:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:03.349 20:08:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.349 20:08:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:03.349 20:08:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:03.349 20:08:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.349 20:08:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:03.349 20:08:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:03.349 20:08:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:03.349 20:08:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:03.349 20:08:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:03.349 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:17:03.349 20:08:45 -- nvmf/common.sh@470 -- # nvmfpid=69293 00:17:03.349 20:08:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:03.349 20:08:45 -- nvmf/common.sh@471 -- # waitforlisten 69293 00:17:03.349 20:08:45 -- common/autotest_common.sh@817 -- # '[' -z 69293 ']' 00:17:03.349 20:08:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.349 20:08:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:03.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:03.349 20:08:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.349 20:08:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:03.349 20:08:45 -- common/autotest_common.sh@10 -- # set +x 00:17:03.349 [2024-04-24 20:08:45.532631] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:03.349 [2024-04-24 20:08:45.532693] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.607 [2024-04-24 20:08:45.672690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.607 [2024-04-24 20:08:45.764917] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.607 [2024-04-24 20:08:45.764966] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.607 [2024-04-24 20:08:45.764973] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.607 [2024-04-24 20:08:45.764978] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.607 [2024-04-24 20:08:45.764982] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.607 [2024-04-24 20:08:45.765148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.607 [2024-04-24 20:08:45.765320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.607 [2024-04-24 20:08:45.765439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.607 [2024-04-24 20:08:45.765445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:04.172 20:08:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:04.172 20:08:46 -- common/autotest_common.sh@850 -- # return 0 00:17:04.172 20:08:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:04.172 20:08:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:04.172 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.172 20:08:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.172 20:08:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.172 20:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.172 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.172 [2024-04-24 20:08:46.417656] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.430 20:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.430 20:08:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.430 20:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.430 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.430 Malloc0 00:17:04.430 20:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.430 20:08:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.430 20:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.430 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.430 20:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.430 20:08:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.430 20:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.430 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.430 20:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.430 20:08:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.430 20:08:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.430 20:08:46 -- common/autotest_common.sh@10 -- # set +x 00:17:04.430 [2024-04-24 20:08:46.481216] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:04.430 [2024-04-24 20:08:46.481464] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.430 20:08:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.430 20:08:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:04.430 20:08:46 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:04.430 20:08:46 -- nvmf/common.sh@521 -- # config=() 00:17:04.430 20:08:46 -- nvmf/common.sh@521 -- # local subsystem config 00:17:04.430 20:08:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:04.430 20:08:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:04.430 { 00:17:04.430 "params": { 00:17:04.430 "name": "Nvme$subsystem", 00:17:04.430 "trtype": "$TEST_TRANSPORT", 00:17:04.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.430 "adrfam": "ipv4", 00:17:04.430 "trsvcid": "$NVMF_PORT", 00:17:04.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.430 "hdgst": ${hdgst:-false}, 00:17:04.430 "ddgst": ${ddgst:-false} 00:17:04.430 }, 00:17:04.430 "method": "bdev_nvme_attach_controller" 00:17:04.430 } 00:17:04.430 EOF 00:17:04.430 )") 00:17:04.430 20:08:46 -- nvmf/common.sh@543 -- # cat 00:17:04.430 20:08:46 -- nvmf/common.sh@545 -- # jq . 00:17:04.430 20:08:46 -- nvmf/common.sh@546 -- # IFS=, 00:17:04.430 20:08:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:04.430 "params": { 00:17:04.430 "name": "Nvme1", 00:17:04.430 "trtype": "tcp", 00:17:04.430 "traddr": "10.0.0.2", 00:17:04.430 "adrfam": "ipv4", 00:17:04.430 "trsvcid": "4420", 00:17:04.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.430 "hdgst": false, 00:17:04.430 "ddgst": false 00:17:04.430 }, 00:17:04.430 "method": "bdev_nvme_attach_controller" 00:17:04.430 }' 00:17:04.430 [2024-04-24 20:08:46.536775] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:17:04.430 [2024-04-24 20:08:46.536836] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69330 ] 00:17:04.430 [2024-04-24 20:08:46.679079] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:04.689 [2024-04-24 20:08:46.775743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.689 [2024-04-24 20:08:46.775948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.689 [2024-04-24 20:08:46.775950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.689 I/O targets: 00:17:04.689 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:04.689 00:17:04.689 00:17:04.689 CUnit - A unit testing framework for C - Version 2.1-3 00:17:04.689 http://cunit.sourceforge.net/ 00:17:04.689 00:17:04.689 00:17:04.689 Suite: bdevio tests on: Nvme1n1 00:17:04.947 Test: blockdev write read block ...passed 00:17:04.947 Test: blockdev write zeroes read block ...passed 00:17:04.947 Test: blockdev write zeroes read no split ...passed 00:17:04.947 Test: blockdev write zeroes read split ...passed 00:17:04.947 Test: blockdev write zeroes read split partial ...passed 00:17:04.947 Test: blockdev reset ...[2024-04-24 20:08:46.963266] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:04.947 [2024-04-24 20:08:46.963413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c37660 (9): Bad file descriptor 00:17:04.947 [2024-04-24 20:08:46.982903] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:04.947 passed 00:17:04.947 Test: blockdev write read 8 blocks ...passed 00:17:04.947 Test: blockdev write read size > 128k ...passed 00:17:04.947 Test: blockdev write read invalid size ...passed 00:17:04.947 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:04.947 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:04.947 Test: blockdev write read max offset ...passed 00:17:04.947 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:04.947 Test: blockdev writev readv 8 blocks ...passed 00:17:04.947 Test: blockdev writev readv 30 x 1block ...passed 00:17:04.947 Test: blockdev writev readv block ...passed 00:17:04.947 Test: blockdev writev readv size > 128k ...passed 00:17:04.947 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:04.947 Test: blockdev comparev and writev ...[2024-04-24 20:08:46.990540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.990580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.990596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.990603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.990919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.990934] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.990946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.990953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.991239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.991253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.991265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.991272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.991573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.991587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.991599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:04.947 [2024-04-24 20:08:46.991605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:04.947 passed 00:17:04.947 Test: blockdev nvme passthru rw ...passed 00:17:04.947 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:08:46.992322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.947 [2024-04-24 20:08:46.992343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.992459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.947 [2024-04-24 20:08:46.992472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.992568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.947 [2024-04-24 20:08:46.992580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:04.947 [2024-04-24 20:08:46.992679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:04.947 [2024-04-24 20:08:46.992691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:04.947 passed 00:17:04.947 Test: blockdev nvme admin passthru ...passed 00:17:04.947 Test: blockdev copy ...passed 00:17:04.947 00:17:04.947 Run Summary: Type Total Ran Passed Failed Inactive 00:17:04.947 suites 1 1 n/a 0 0 00:17:04.947 tests 23 23 23 0 0 00:17:04.947 asserts 
152 152 152 0 n/a 00:17:04.947 00:17:04.947 Elapsed time = 0.145 seconds 00:17:05.206 20:08:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.206 20:08:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.206 20:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:05.206 20:08:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.206 20:08:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:05.206 20:08:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:05.206 20:08:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:05.206 20:08:47 -- nvmf/common.sh@117 -- # sync 00:17:05.206 20:08:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.206 20:08:47 -- nvmf/common.sh@120 -- # set +e 00:17:05.206 20:08:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.206 20:08:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.206 rmmod nvme_tcp 00:17:05.206 rmmod nvme_fabrics 00:17:05.206 rmmod nvme_keyring 00:17:05.206 20:08:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.206 20:08:47 -- nvmf/common.sh@124 -- # set -e 00:17:05.206 20:08:47 -- nvmf/common.sh@125 -- # return 0 00:17:05.206 20:08:47 -- nvmf/common.sh@478 -- # '[' -n 69293 ']' 00:17:05.206 20:08:47 -- nvmf/common.sh@479 -- # killprocess 69293 00:17:05.206 20:08:47 -- common/autotest_common.sh@936 -- # '[' -z 69293 ']' 00:17:05.206 20:08:47 -- common/autotest_common.sh@940 -- # kill -0 69293 00:17:05.206 20:08:47 -- common/autotest_common.sh@941 -- # uname 00:17:05.206 20:08:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:05.206 20:08:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69293 00:17:05.206 20:08:47 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:05.206 20:08:47 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:05.206 20:08:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69293' 00:17:05.206 killing process with pid 69293 00:17:05.206 20:08:47 -- common/autotest_common.sh@955 -- # kill 69293 00:17:05.206 [2024-04-24 20:08:47.335456] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:05.206 20:08:47 -- common/autotest_common.sh@960 -- # wait 69293 00:17:05.465 20:08:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:05.465 20:08:47 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:05.465 20:08:47 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:05.465 20:08:47 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.465 20:08:47 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.465 20:08:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.465 20:08:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.465 20:08:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.465 20:08:47 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:05.465 00:17:05.465 real 0m2.703s 00:17:05.465 user 0m8.396s 00:17:05.465 sys 0m0.744s 00:17:05.465 20:08:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:05.465 20:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:05.465 ************************************ 00:17:05.465 END TEST nvmf_bdevio 00:17:05.465 ************************************ 00:17:05.465 20:08:47 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:05.465 20:08:47 
-- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:05.465 20:08:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:05.465 20:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:05.465 20:08:47 -- common/autotest_common.sh@10 -- # set +x 00:17:05.725 ************************************ 00:17:05.725 START TEST nvmf_bdevio_no_huge 00:17:05.725 ************************************ 00:17:05.725 20:08:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:05.725 * Looking for test storage... 00:17:05.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:05.725 20:08:47 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:05.725 20:08:47 -- nvmf/common.sh@7 -- # uname -s 00:17:05.725 20:08:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.725 20:08:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.725 20:08:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.725 20:08:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.725 20:08:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.725 20:08:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.725 20:08:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.725 20:08:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.725 20:08:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.725 20:08:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.725 20:08:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:17:05.725 20:08:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:17:05.725 20:08:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.725 20:08:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.725 20:08:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:05.725 20:08:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.725 20:08:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:05.725 20:08:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.725 20:08:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.725 20:08:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.725 20:08:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.725 20:08:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.725 20:08:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.725 20:08:47 -- paths/export.sh@5 -- # export PATH 00:17:05.725 20:08:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.725 20:08:47 -- nvmf/common.sh@47 -- # : 0 00:17:05.725 20:08:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:05.725 20:08:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:05.725 20:08:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.725 20:08:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.725 20:08:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.725 20:08:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:05.725 20:08:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:05.725 20:08:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:05.725 20:08:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.725 20:08:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.725 20:08:47 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:05.725 20:08:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:05.725 20:08:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.725 20:08:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:05.725 20:08:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:05.725 20:08:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:05.725 20:08:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.725 20:08:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.725 20:08:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.725 20:08:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:05.725 20:08:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:05.725 20:08:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:05.725 20:08:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:05.725 20:08:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:05.725 20:08:47 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:17:05.725 20:08:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:05.725 20:08:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:05.725 20:08:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:05.725 20:08:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:05.725 20:08:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:05.725 20:08:47 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:05.725 20:08:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:05.725 20:08:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:05.725 20:08:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:05.725 20:08:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:05.725 20:08:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:05.725 20:08:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:05.725 20:08:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:05.725 20:08:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:05.725 Cannot find device "nvmf_tgt_br" 00:17:05.725 20:08:47 -- nvmf/common.sh@155 -- # true 00:17:05.725 20:08:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:05.725 Cannot find device "nvmf_tgt_br2" 00:17:05.725 20:08:47 -- nvmf/common.sh@156 -- # true 00:17:05.725 20:08:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:05.725 20:08:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:05.983 Cannot find device "nvmf_tgt_br" 00:17:05.983 20:08:47 -- nvmf/common.sh@158 -- # true 00:17:05.983 20:08:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:05.983 Cannot find device "nvmf_tgt_br2" 00:17:05.983 20:08:48 -- nvmf/common.sh@159 -- # true 00:17:05.983 20:08:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:05.983 20:08:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:05.984 20:08:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:05.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.984 20:08:48 -- nvmf/common.sh@162 -- # true 00:17:05.984 20:08:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:05.984 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:05.984 20:08:48 -- nvmf/common.sh@163 -- # true 00:17:05.984 20:08:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:05.984 20:08:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:05.984 20:08:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:05.984 20:08:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:05.984 20:08:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:05.984 20:08:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:05.984 20:08:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:05.984 20:08:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:05.984 20:08:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:05.984 20:08:48 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:17:05.984 20:08:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:05.984 20:08:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:05.984 20:08:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:05.984 20:08:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:05.984 20:08:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:05.984 20:08:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:05.984 20:08:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:05.984 20:08:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:05.984 20:08:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:05.984 20:08:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:05.984 20:08:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:05.984 20:08:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:05.984 20:08:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:05.984 20:08:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:05.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:05.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:17:05.984 00:17:05.984 --- 10.0.0.2 ping statistics --- 00:17:05.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.984 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:05.984 20:08:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:05.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:05.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:05.984 00:17:05.984 --- 10.0.0.3 ping statistics --- 00:17:05.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.984 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:05.984 20:08:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:05.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:05.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:05.984 00:17:05.984 --- 10.0.0.1 ping statistics --- 00:17:05.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:05.984 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:05.984 20:08:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:05.984 20:08:48 -- nvmf/common.sh@422 -- # return 0 00:17:05.984 20:08:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:05.984 20:08:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:05.984 20:08:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:05.984 20:08:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:05.984 20:08:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:05.984 20:08:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:05.984 20:08:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:06.243 20:08:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:06.243 20:08:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:06.243 20:08:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:06.243 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:06.243 20:08:48 -- nvmf/common.sh@470 -- # nvmfpid=69515 00:17:06.243 20:08:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:06.243 20:08:48 -- nvmf/common.sh@471 -- # waitforlisten 69515 00:17:06.243 20:08:48 -- common/autotest_common.sh@817 -- # '[' -z 69515 ']' 00:17:06.243 20:08:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.243 20:08:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:06.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.243 20:08:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.243 20:08:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:06.243 20:08:48 -- common/autotest_common.sh@10 -- # set +x 00:17:06.243 [2024-04-24 20:08:48.326484] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:06.243 [2024-04-24 20:08:48.326564] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:06.243 [2024-04-24 20:08:48.462957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.502 [2024-04-24 20:08:48.564695] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.502 [2024-04-24 20:08:48.564752] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.502 [2024-04-24 20:08:48.564759] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.502 [2024-04-24 20:08:48.564764] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.502 [2024-04-24 20:08:48.564769] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:06.502 [2024-04-24 20:08:48.564925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.502 [2024-04-24 20:08:48.565032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.502 [2024-04-24 20:08:48.565137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.502 [2024-04-24 20:08:48.565142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.070 20:08:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:07.070 20:08:49 -- common/autotest_common.sh@850 -- # return 0 00:17:07.070 20:08:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:07.070 20:08:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:07.070 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 20:08:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.070 20:08:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.070 20:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.070 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 [2024-04-24 20:08:49.260333] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.070 20:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.070 20:08:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.070 20:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.070 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 Malloc0 00:17:07.070 20:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.070 20:08:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.070 20:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.070 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 20:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.070 20:08:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.070 20:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.070 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 20:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.070 20:08:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.070 20:08:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.070 20:08:49 -- common/autotest_common.sh@10 -- # set +x 00:17:07.070 [2024-04-24 20:08:49.302106] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:07.070 [2024-04-24 20:08:49.302374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.070 20:08:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.070 20:08:49 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:07.070 20:08:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.070 20:08:49 -- nvmf/common.sh@521 -- # config=() 00:17:07.070 20:08:49 -- nvmf/common.sh@521 -- # local subsystem config 00:17:07.070 20:08:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:07.070 20:08:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:07.070 
{ 00:17:07.070 "params": { 00:17:07.070 "name": "Nvme$subsystem", 00:17:07.070 "trtype": "$TEST_TRANSPORT", 00:17:07.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.070 "adrfam": "ipv4", 00:17:07.070 "trsvcid": "$NVMF_PORT", 00:17:07.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.070 "hdgst": ${hdgst:-false}, 00:17:07.070 "ddgst": ${ddgst:-false} 00:17:07.070 }, 00:17:07.070 "method": "bdev_nvme_attach_controller" 00:17:07.070 } 00:17:07.070 EOF 00:17:07.070 )") 00:17:07.070 20:08:49 -- nvmf/common.sh@543 -- # cat 00:17:07.070 20:08:49 -- nvmf/common.sh@545 -- # jq . 00:17:07.329 20:08:49 -- nvmf/common.sh@546 -- # IFS=, 00:17:07.329 20:08:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:07.329 "params": { 00:17:07.329 "name": "Nvme1", 00:17:07.329 "trtype": "tcp", 00:17:07.329 "traddr": "10.0.0.2", 00:17:07.329 "adrfam": "ipv4", 00:17:07.329 "trsvcid": "4420", 00:17:07.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.329 "hdgst": false, 00:17:07.329 "ddgst": false 00:17:07.329 }, 00:17:07.329 "method": "bdev_nvme_attach_controller" 00:17:07.329 }' 00:17:07.329 [2024-04-24 20:08:49.355130] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:07.329 [2024-04-24 20:08:49.355198] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid69551 ] 00:17:07.329 [2024-04-24 20:08:49.488561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.595 [2024-04-24 20:08:49.612689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.595 [2024-04-24 20:08:49.612758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.595 [2024-04-24 20:08:49.612761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.595 I/O targets: 00:17:07.595 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.595 00:17:07.595 00:17:07.595 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.595 http://cunit.sourceforge.net/ 00:17:07.595 00:17:07.595 00:17:07.595 Suite: bdevio tests on: Nvme1n1 00:17:07.595 Test: blockdev write read block ...passed 00:17:07.595 Test: blockdev write zeroes read block ...passed 00:17:07.595 Test: blockdev write zeroes read no split ...passed 00:17:07.595 Test: blockdev write zeroes read split ...passed 00:17:07.595 Test: blockdev write zeroes read split partial ...passed 00:17:07.595 Test: blockdev reset ...[2024-04-24 20:08:49.809942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.595 [2024-04-24 20:08:49.810067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142e450 (9): Bad file descriptor 00:17:07.595 passed 00:17:07.595 Test: blockdev write read 8 blocks ...[2024-04-24 20:08:49.829922] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:07.595 passed 00:17:07.595 Test: blockdev write read size > 128k ...passed 00:17:07.595 Test: blockdev write read invalid size ...passed 00:17:07.595 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:07.595 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:07.595 Test: blockdev write read max offset ...passed 00:17:07.595 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:07.595 Test: blockdev writev readv 8 blocks ...passed 00:17:07.595 Test: blockdev writev readv 30 x 1block ...passed 00:17:07.595 Test: blockdev writev readv block ...passed 00:17:07.595 Test: blockdev writev readv size > 128k ...passed 00:17:07.595 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:07.595 Test: blockdev comparev and writev ...[2024-04-24 20:08:49.837868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.837909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.837923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.837931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.838224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.838242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.838255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.838262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.838525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.838543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.838555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.838562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.838838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.838855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.838868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.595 [2024-04-24 20:08:49.838875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:17:07.595 passed 00:17:07.595 Test: blockdev nvme passthru rw ...passed 00:17:07.595 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:08:49.839747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.595 [2024-04-24 20:08:49.839769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.839865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.595 [2024-04-24 20:08:49.839885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:07.595 [2024-04-24 20:08:49.839991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.596 [2024-04-24 20:08:49.840004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:07.596 [2024-04-24 20:08:49.840105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:07.596 [2024-04-24 20:08:49.840118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:07.596 passed 00:17:07.865 Test: blockdev nvme admin passthru ...passed 00:17:07.865 Test: blockdev copy ...passed 00:17:07.865 00:17:07.865 Run Summary: Type Total Ran Passed Failed Inactive 00:17:07.865 suites 1 1 n/a 0 0 00:17:07.865 tests 23 23 23 0 0 00:17:07.865 asserts 152 152 152 0 n/a 00:17:07.865 00:17:07.865 Elapsed time = 0.211 seconds 00:17:08.123 20:08:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.123 20:08:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.123 20:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:08.123 20:08:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.123 20:08:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:08.123 20:08:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:08.123 20:08:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:08.123 20:08:50 -- nvmf/common.sh@117 -- # sync 00:17:08.123 20:08:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.123 20:08:50 -- nvmf/common.sh@120 -- # set +e 00:17:08.123 20:08:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.123 20:08:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.123 rmmod nvme_tcp 00:17:08.123 rmmod nvme_fabrics 00:17:08.123 rmmod nvme_keyring 00:17:08.123 20:08:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.123 20:08:50 -- nvmf/common.sh@124 -- # set -e 00:17:08.123 20:08:50 -- nvmf/common.sh@125 -- # return 0 00:17:08.123 20:08:50 -- nvmf/common.sh@478 -- # '[' -n 69515 ']' 00:17:08.123 20:08:50 -- nvmf/common.sh@479 -- # killprocess 69515 00:17:08.123 20:08:50 -- common/autotest_common.sh@936 -- # '[' -z 69515 ']' 00:17:08.123 20:08:50 -- common/autotest_common.sh@940 -- # kill -0 69515 00:17:08.123 20:08:50 -- common/autotest_common.sh@941 -- # uname 00:17:08.123 20:08:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.123 20:08:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69515 00:17:08.123 20:08:50 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:08.123 20:08:50 -- 
common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:08.123 killing process with pid 69515 00:17:08.123 20:08:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69515' 00:17:08.123 20:08:50 -- common/autotest_common.sh@955 -- # kill 69515 00:17:08.123 [2024-04-24 20:08:50.359717] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:08.123 20:08:50 -- common/autotest_common.sh@960 -- # wait 69515 00:17:08.691 20:08:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:08.691 20:08:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:08.692 20:08:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:08.692 20:08:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.692 20:08:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.692 20:08:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.692 20:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.692 20:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.692 20:08:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:08.692 00:17:08.692 real 0m2.995s 00:17:08.692 user 0m9.623s 00:17:08.692 sys 0m1.194s 00:17:08.692 20:08:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.692 20:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:08.692 ************************************ 00:17:08.692 END TEST nvmf_bdevio_no_huge 00:17:08.692 ************************************ 00:17:08.692 20:08:50 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:08.692 20:08:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:08.692 20:08:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.692 20:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:08.692 ************************************ 00:17:08.692 START TEST nvmf_tls 00:17:08.692 ************************************ 00:17:08.692 20:08:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:08.950 * Looking for test storage... 
00:17:08.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:08.950 20:08:51 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:08.950 20:08:51 -- nvmf/common.sh@7 -- # uname -s 00:17:08.950 20:08:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.950 20:08:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.950 20:08:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.950 20:08:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.950 20:08:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.950 20:08:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.950 20:08:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.950 20:08:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.950 20:08:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.950 20:08:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.950 20:08:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:17:08.950 20:08:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:17:08.951 20:08:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.951 20:08:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.951 20:08:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:08.951 20:08:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.951 20:08:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:08.951 20:08:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.951 20:08:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.951 20:08:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.951 20:08:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.951 20:08:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.951 20:08:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.951 20:08:51 -- paths/export.sh@5 -- # export PATH 00:17:08.951 20:08:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.951 20:08:51 -- nvmf/common.sh@47 -- # : 0 00:17:08.951 20:08:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.951 20:08:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.951 20:08:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.951 20:08:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.951 20:08:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.951 20:08:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.951 20:08:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.951 20:08:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.951 20:08:51 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.951 20:08:51 -- target/tls.sh@62 -- # nvmftestinit 00:17:08.951 20:08:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:08.951 20:08:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.951 20:08:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:08.951 20:08:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:08.951 20:08:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:08.951 20:08:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.951 20:08:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.951 20:08:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.951 20:08:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:08.951 20:08:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:08.951 20:08:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:08.951 20:08:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:08.951 20:08:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:08.951 20:08:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:08.951 20:08:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.951 20:08:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.951 20:08:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:08.951 20:08:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:08.951 20:08:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:08.951 20:08:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:08.951 20:08:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:08.951 
20:08:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.951 20:08:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:08.951 20:08:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:08.951 20:08:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:08.951 20:08:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:08.951 20:08:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:08.951 20:08:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:08.951 Cannot find device "nvmf_tgt_br" 00:17:08.951 20:08:51 -- nvmf/common.sh@155 -- # true 00:17:08.951 20:08:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:08.951 Cannot find device "nvmf_tgt_br2" 00:17:08.951 20:08:51 -- nvmf/common.sh@156 -- # true 00:17:08.951 20:08:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:08.951 20:08:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:08.951 Cannot find device "nvmf_tgt_br" 00:17:08.951 20:08:51 -- nvmf/common.sh@158 -- # true 00:17:08.951 20:08:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:08.951 Cannot find device "nvmf_tgt_br2" 00:17:08.951 20:08:51 -- nvmf/common.sh@159 -- # true 00:17:08.951 20:08:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:09.211 20:08:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:09.211 20:08:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.211 20:08:51 -- nvmf/common.sh@162 -- # true 00:17:09.211 20:08:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.211 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.211 20:08:51 -- nvmf/common.sh@163 -- # true 00:17:09.211 20:08:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.211 20:08:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.211 20:08:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.211 20:08:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.211 20:08:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.211 20:08:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.211 20:08:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.211 20:08:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:09.211 20:08:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:09.211 20:08:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:09.211 20:08:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:09.211 20:08:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:09.211 20:08:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:09.211 20:08:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.211 20:08:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.211 20:08:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.211 20:08:51 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:09.211 20:08:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:09.211 20:08:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.211 20:08:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.211 20:08:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:09.211 20:08:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.211 20:08:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.211 20:08:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:09.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:17:09.211 00:17:09.211 --- 10.0.0.2 ping statistics --- 00:17:09.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.211 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:09.211 20:08:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:09.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:17:09.211 00:17:09.211 --- 10.0.0.3 ping statistics --- 00:17:09.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.211 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:09.211 20:08:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:09.211 00:17:09.211 --- 10.0.0.1 ping statistics --- 00:17:09.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.211 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:09.211 20:08:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.211 20:08:51 -- nvmf/common.sh@422 -- # return 0 00:17:09.211 20:08:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:09.211 20:08:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.211 20:08:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:09.211 20:08:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:09.211 20:08:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.211 20:08:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:09.211 20:08:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:09.211 20:08:51 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:09.211 20:08:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:09.211 20:08:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:09.211 20:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:09.470 20:08:51 -- nvmf/common.sh@470 -- # nvmfpid=69737 00:17:09.470 20:08:51 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:09.470 20:08:51 -- nvmf/common.sh@471 -- # waitforlisten 69737 00:17:09.470 20:08:51 -- common/autotest_common.sh@817 -- # '[' -z 69737 ']' 00:17:09.470 20:08:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.470 20:08:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:09.471 20:08:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
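For reference while reading the nvmf_veth_init trace above, this is a condensed sketch of the topology those commands build: an initiator veth pair kept on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, and the host-side ends tied together by the nvmf_br bridge. Names and addresses are taken from the log; the link-up steps and the three verification pings are omitted.

  # Target side lives in its own network namespace; the host side plays the initiator.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair (stays on the host)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT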
00:17:09.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.471 20:08:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:09.471 20:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:09.471 [2024-04-24 20:08:51.522317] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:09.471 [2024-04-24 20:08:51.522399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.471 [2024-04-24 20:08:51.665154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.731 [2024-04-24 20:08:51.765028] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.731 [2024-04-24 20:08:51.765076] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.731 [2024-04-24 20:08:51.765098] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.731 [2024-04-24 20:08:51.765104] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.731 [2024-04-24 20:08:51.765108] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.731 [2024-04-24 20:08:51.765139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.300 20:08:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:10.300 20:08:52 -- common/autotest_common.sh@850 -- # return 0 00:17:10.300 20:08:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:10.300 20:08:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:10.300 20:08:52 -- common/autotest_common.sh@10 -- # set +x 00:17:10.300 20:08:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.300 20:08:52 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:10.300 20:08:52 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:10.561 true 00:17:10.561 20:08:52 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:10.561 20:08:52 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:10.821 20:08:52 -- target/tls.sh@73 -- # version=0 00:17:10.821 20:08:52 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:10.821 20:08:52 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:11.081 20:08:53 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.081 20:08:53 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:11.081 20:08:53 -- target/tls.sh@81 -- # version=13 00:17:11.081 20:08:53 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:11.081 20:08:53 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:11.340 20:08:53 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:11.340 20:08:53 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.599 20:08:53 -- target/tls.sh@89 -- # version=7 00:17:11.599 20:08:53 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:11.599 20:08:53 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.599 20:08:53 -- 
target/tls.sh@96 -- # jq -r .enable_ktls 00:17:11.859 20:08:53 -- target/tls.sh@96 -- # ktls=false 00:17:11.859 20:08:53 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:11.859 20:08:53 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:12.119 20:08:54 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.119 20:08:54 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:12.119 20:08:54 -- target/tls.sh@104 -- # ktls=true 00:17:12.119 20:08:54 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:12.119 20:08:54 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:12.377 20:08:54 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.377 20:08:54 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:12.669 20:08:54 -- target/tls.sh@112 -- # ktls=false 00:17:12.669 20:08:54 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:12.669 20:08:54 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:12.670 20:08:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:12.670 20:08:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:12.670 20:08:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:12.670 20:08:54 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:12.670 20:08:54 -- nvmf/common.sh@693 -- # digest=1 00:17:12.670 20:08:54 -- nvmf/common.sh@694 -- # python - 00:17:12.670 20:08:54 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:12.670 20:08:54 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:12.670 20:08:54 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:12.670 20:08:54 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:12.670 20:08:54 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:12.670 20:08:54 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:12.670 20:08:54 -- nvmf/common.sh@693 -- # digest=1 00:17:12.670 20:08:54 -- nvmf/common.sh@694 -- # python - 00:17:12.670 20:08:54 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.670 20:08:54 -- target/tls.sh@121 -- # mktemp 00:17:12.670 20:08:54 -- target/tls.sh@121 -- # key_path=/tmp/tmp.VwFKAdLc2p 00:17:12.670 20:08:54 -- target/tls.sh@122 -- # mktemp 00:17:12.670 20:08:54 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.2BJV3CYPe7 00:17:12.670 20:08:54 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:12.670 20:08:54 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:12.670 20:08:54 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.VwFKAdLc2p 00:17:12.670 20:08:54 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.2BJV3CYPe7 00:17:12.670 20:08:54 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:12.936 20:08:55 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:13.194 20:08:55 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.VwFKAdLc2p 00:17:13.194 20:08:55 -- target/tls.sh@49 -- # local key=/tmp/tmp.VwFKAdLc2p 00:17:13.194 20:08:55 -- 
target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:13.453 [2024-04-24 20:08:55.546844] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.453 20:08:55 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:13.712 20:08:55 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:13.713 [2024-04-24 20:08:55.954197] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:13.713 [2024-04-24 20:08:55.954285] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.713 [2024-04-24 20:08:55.954472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.973 20:08:55 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:13.973 malloc0 00:17:13.973 20:08:56 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:14.233 20:08:56 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VwFKAdLc2p 00:17:14.492 [2024-04-24 20:08:56.594399] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:14.492 20:08:56 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.VwFKAdLc2p 00:17:26.705 Initializing NVMe Controllers 00:17:26.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:26.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:26.705 Initialization complete. Launching workers. 
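The setup_nvmf_tgt steps above reduce to a short recipe: write a PSK in the NVMe TLS interchange format, stand up a TLS-capable NVMe/TCP listener, and bind that key to one host NQN. The sketch below uses the NQNs, addresses and RPC script path from the log; the interchange encoding (hash identifier 01 and a CRC32 appended to the literal key string before base64) is inferred from the keys printed above, so treat it as illustrative rather than authoritative. The perf numbers for this first, correctly keyed connection follow below.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=$(mktemp)
  # Build "NVMeTLSkey-1:01:<base64(key || crc32(key))>:" from a literal key string.
  # The little-endian byte order of the appended CRC is an assumption here.
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:01:"+base64.b64encode(k+crc).decode()+":", end="")' \
      00112233445566778899aabbccddeeff > "$key_path"
  chmod 0600 "$key_path"

  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"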
00:17:26.705 ======================================================== 00:17:26.705 Latency(us) 00:17:26.705 Device Information : IOPS MiB/s Average min max 00:17:26.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13602.41 53.13 4705.77 936.26 16640.83 00:17:26.705 ======================================================== 00:17:26.705 Total : 13602.41 53.13 4705.77 936.26 16640.83 00:17:26.705 00:17:26.705 20:09:06 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VwFKAdLc2p 00:17:26.705 20:09:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:26.705 20:09:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:26.705 20:09:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.705 20:09:06 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VwFKAdLc2p' 00:17:26.705 20:09:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.705 20:09:06 -- target/tls.sh@28 -- # bdevperf_pid=69958 00:17:26.705 20:09:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.705 20:09:06 -- target/tls.sh@31 -- # waitforlisten 69958 /var/tmp/bdevperf.sock 00:17:26.705 20:09:06 -- common/autotest_common.sh@817 -- # '[' -z 69958 ']' 00:17:26.705 20:09:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.705 20:09:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.705 20:09:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.705 20:09:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.705 20:09:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.705 20:09:06 -- common/autotest_common.sh@10 -- # set +x 00:17:26.705 [2024-04-24 20:09:06.822179] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
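On the initiator side, the trace that follows exercises the same listener through bdevperf instead of spdk_nvme_perf. In outline, with the paths and arguments as they appear in the log (a sketch of the run_bdevperf flow, not its verbatim source):

  # Start bdevperf in RPC-driven mode (-z) on its own socket, then hand it a TLS-backed bdev.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  # ...wait until /var/tmp/bdevperf.sock accepts RPCs...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VwFKAdLc2p        # creates bdev TLSTESTn1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests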
00:17:26.705 [2024-04-24 20:09:06.822283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69958 ] 00:17:26.705 [2024-04-24 20:09:06.982693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.705 [2024-04-24 20:09:07.117045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.705 20:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:26.705 20:09:07 -- common/autotest_common.sh@850 -- # return 0 00:17:26.705 20:09:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VwFKAdLc2p 00:17:26.705 [2024-04-24 20:09:08.031234] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.706 [2024-04-24 20:09:08.031360] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:26.706 TLSTESTn1 00:17:26.706 20:09:08 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:26.706 Running I/O for 10 seconds... 00:17:36.673 00:17:36.673 Latency(us) 00:17:36.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.673 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:36.673 Verification LBA range: start 0x0 length 0x2000 00:17:36.673 TLSTESTn1 : 10.01 4570.58 17.85 0.00 0.00 27959.33 5323.01 40294.62 00:17:36.673 =================================================================================================================== 00:17:36.673 Total : 4570.58 17.85 0.00 0.00 27959.33 5323.01 40294.62 00:17:36.673 0 00:17:36.673 20:09:18 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.673 20:09:18 -- target/tls.sh@45 -- # killprocess 69958 00:17:36.673 20:09:18 -- common/autotest_common.sh@936 -- # '[' -z 69958 ']' 00:17:36.674 20:09:18 -- common/autotest_common.sh@940 -- # kill -0 69958 00:17:36.674 20:09:18 -- common/autotest_common.sh@941 -- # uname 00:17:36.674 20:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.674 20:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69958 00:17:36.674 killing process with pid 69958 00:17:36.674 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.674 00:17:36.674 Latency(us) 00:17:36.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.674 =================================================================================================================== 00:17:36.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.674 20:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:36.674 20:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:36.674 20:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69958' 00:17:36.674 20:09:18 -- common/autotest_common.sh@955 -- # kill 69958 00:17:36.674 [2024-04-24 20:09:18.269114] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:36.674 
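The killprocess calls that punctuate the rest of the trace all follow the same visible shape (kill -0 check, ps comm lookup, kill, then a separate wait). A sketch reconstructed from those steps, not the exact autotest_common.sh helper:

  killprocess() {
      local pid=$1 name
      kill -0 "$pid"                              # bail out if the pid is already gone
      name=$(ps --no-headers -o comm= "$pid")     # an SPDK app shows up as reactor_N
      echo "killing process with pid $pid"
      kill "$pid"                                 # the real helper special-cases name == sudo; omitted here
  }
  killprocess 69958
  wait 69958                                      # reap the backgrounded bdevperf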
20:09:18 -- common/autotest_common.sh@960 -- # wait 69958 00:17:36.674 20:09:18 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BJV3CYPe7 00:17:36.674 20:09:18 -- common/autotest_common.sh@638 -- # local es=0 00:17:36.674 20:09:18 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BJV3CYPe7 00:17:36.674 20:09:18 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:36.674 20:09:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.674 20:09:18 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:36.674 20:09:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:36.674 20:09:18 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BJV3CYPe7 00:17:36.674 20:09:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.674 20:09:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.674 20:09:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.674 20:09:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2BJV3CYPe7' 00:17:36.674 20:09:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.674 20:09:18 -- target/tls.sh@28 -- # bdevperf_pid=70097 00:17:36.674 20:09:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.674 20:09:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.674 20:09:18 -- target/tls.sh@31 -- # waitforlisten 70097 /var/tmp/bdevperf.sock 00:17:36.674 20:09:18 -- common/autotest_common.sh@817 -- # '[' -z 70097 ']' 00:17:36.674 20:09:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.674 20:09:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:36.674 20:09:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.674 20:09:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.674 20:09:18 -- common/autotest_common.sh@10 -- # set +x 00:17:36.674 [2024-04-24 20:09:18.553206] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
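From here the script moves to negative cases. The NOT wrapper around run_bdevperf succeeds only when the wrapped call fails, and this first case hands host1 the key /tmp/tmp.2BJV3CYPe7, which was never registered on the target, so the TLS attach is expected to be rejected. Roughly, as inferred from the es=/return lines in the trace rather than the verbatim helper:

  NOT() {
      local es=0
      "$@" || es=$?      # run the real command and remember its exit status
      (( es != 0 ))      # the test step passes only if that command failed
  }
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2BJV3CYPe7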
00:17:36.674 [2024-04-24 20:09:18.553393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70097 ] 00:17:36.674 [2024-04-24 20:09:18.692559] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.674 [2024-04-24 20:09:18.796953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.242 20:09:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.242 20:09:19 -- common/autotest_common.sh@850 -- # return 0 00:17:37.242 20:09:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2BJV3CYPe7 00:17:37.501 [2024-04-24 20:09:19.611540] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.501 [2024-04-24 20:09:19.611645] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.501 [2024-04-24 20:09:19.620121] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:37.501 [2024-04-24 20:09:19.620850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2364a80 (107): Transport endpoint is not connected 00:17:37.501 [2024-04-24 20:09:19.621837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2364a80 (9): Bad file descriptor 00:17:37.501 [2024-04-24 20:09:19.622833] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:37.501 [2024-04-24 20:09:19.622853] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:37.501 [2024-04-24 20:09:19.622863] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
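This failure signature repeats for every rejected attach in the log: the target drops the connection during setup, nvme_tcp_read_data reports errno 107 (Transport endpoint is not connected), the next poll finds the file descriptor already closed, and controller initialization gives up, which surfaces as the JSON-RPC error shown next. A hypothetical way to confirm the problem is the key rather than basic connectivity would be to retry the same attach with the key that was actually registered (this check is not part of the test):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSCHECK -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VwFKAdLc2p      # TLSCHECK is a hypothetical bdev name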
00:17:37.501 request: 00:17:37.501 { 00:17:37.501 "name": "TLSTEST", 00:17:37.501 "trtype": "tcp", 00:17:37.501 "traddr": "10.0.0.2", 00:17:37.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.501 "adrfam": "ipv4", 00:17:37.501 "trsvcid": "4420", 00:17:37.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.501 "psk": "/tmp/tmp.2BJV3CYPe7", 00:17:37.501 "method": "bdev_nvme_attach_controller", 00:17:37.501 "req_id": 1 00:17:37.501 } 00:17:37.501 Got JSON-RPC error response 00:17:37.501 response: 00:17:37.501 { 00:17:37.501 "code": -32602, 00:17:37.501 "message": "Invalid parameters" 00:17:37.501 } 00:17:37.501 20:09:19 -- target/tls.sh@36 -- # killprocess 70097 00:17:37.501 20:09:19 -- common/autotest_common.sh@936 -- # '[' -z 70097 ']' 00:17:37.501 20:09:19 -- common/autotest_common.sh@940 -- # kill -0 70097 00:17:37.501 20:09:19 -- common/autotest_common.sh@941 -- # uname 00:17:37.501 20:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.501 20:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70097 00:17:37.501 killing process with pid 70097 00:17:37.501 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.501 00:17:37.501 Latency(us) 00:17:37.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.501 =================================================================================================================== 00:17:37.501 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:37.501 20:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:37.501 20:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:37.501 20:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70097' 00:17:37.501 20:09:19 -- common/autotest_common.sh@955 -- # kill 70097 00:17:37.501 [2024-04-24 20:09:19.685500] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:37.501 20:09:19 -- common/autotest_common.sh@960 -- # wait 70097 00:17:37.759 20:09:19 -- target/tls.sh@37 -- # return 1 00:17:37.759 20:09:19 -- common/autotest_common.sh@641 -- # es=1 00:17:37.759 20:09:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:37.759 20:09:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:37.759 20:09:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:37.759 20:09:19 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VwFKAdLc2p 00:17:37.759 20:09:19 -- common/autotest_common.sh@638 -- # local es=0 00:17:37.759 20:09:19 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VwFKAdLc2p 00:17:37.759 20:09:19 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:37.759 20:09:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:37.759 20:09:19 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:37.759 20:09:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:37.759 20:09:19 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VwFKAdLc2p 00:17:37.759 20:09:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.759 20:09:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.759 20:09:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:37.759 
20:09:19 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VwFKAdLc2p' 00:17:37.759 20:09:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.759 20:09:19 -- target/tls.sh@28 -- # bdevperf_pid=70119 00:17:37.759 20:09:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.759 20:09:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.759 20:09:19 -- target/tls.sh@31 -- # waitforlisten 70119 /var/tmp/bdevperf.sock 00:17:37.759 20:09:19 -- common/autotest_common.sh@817 -- # '[' -z 70119 ']' 00:17:37.759 20:09:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.759 20:09:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.759 20:09:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.759 20:09:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.759 20:09:19 -- common/autotest_common.sh@10 -- # set +x 00:17:37.759 [2024-04-24 20:09:19.961760] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:37.759 [2024-04-24 20:09:19.961845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70119 ] 00:17:38.018 [2024-04-24 20:09:20.101570] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.018 [2024-04-24 20:09:20.207155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.954 20:09:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.954 20:09:20 -- common/autotest_common.sh@850 -- # return 0 00:17:38.954 20:09:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.VwFKAdLc2p 00:17:38.954 [2024-04-24 20:09:21.122210] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.954 [2024-04-24 20:09:21.122318] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:38.954 [2024-04-24 20:09:21.127324] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:38.955 [2024-04-24 20:09:21.127359] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:38.955 [2024-04-24 20:09:21.127417] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:38.955 [2024-04-24 20:09:21.127759] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d4a80 (107): Transport endpoint is not connected 00:17:38.955 [2024-04-24 20:09:21.128743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d4a80 (9): Bad file descriptor 00:17:38.955 [2024-04-24 
20:09:21.129739] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.955 [2024-04-24 20:09:21.129753] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:38.955 [2024-04-24 20:09:21.129762] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:38.955 request: 00:17:38.955 { 00:17:38.955 "name": "TLSTEST", 00:17:38.955 "trtype": "tcp", 00:17:38.955 "traddr": "10.0.0.2", 00:17:38.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:38.955 "adrfam": "ipv4", 00:17:38.955 "trsvcid": "4420", 00:17:38.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.955 "psk": "/tmp/tmp.VwFKAdLc2p", 00:17:38.955 "method": "bdev_nvme_attach_controller", 00:17:38.955 "req_id": 1 00:17:38.955 } 00:17:38.955 Got JSON-RPC error response 00:17:38.955 response: 00:17:38.955 { 00:17:38.955 "code": -32602, 00:17:38.955 "message": "Invalid parameters" 00:17:38.955 } 00:17:38.955 20:09:21 -- target/tls.sh@36 -- # killprocess 70119 00:17:38.955 20:09:21 -- common/autotest_common.sh@936 -- # '[' -z 70119 ']' 00:17:38.955 20:09:21 -- common/autotest_common.sh@940 -- # kill -0 70119 00:17:38.955 20:09:21 -- common/autotest_common.sh@941 -- # uname 00:17:38.955 20:09:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.955 20:09:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70119 00:17:38.955 killing process with pid 70119 00:17:38.955 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.955 00:17:38.955 Latency(us) 00:17:38.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.955 =================================================================================================================== 00:17:38.955 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.955 20:09:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.955 20:09:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.955 20:09:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70119' 00:17:38.955 20:09:21 -- common/autotest_common.sh@955 -- # kill 70119 00:17:38.955 [2024-04-24 20:09:21.178792] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:38.955 20:09:21 -- common/autotest_common.sh@960 -- # wait 70119 00:17:39.214 20:09:21 -- target/tls.sh@37 -- # return 1 00:17:39.214 20:09:21 -- common/autotest_common.sh@641 -- # es=1 00:17:39.214 20:09:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:39.214 20:09:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:39.214 20:09:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:39.214 20:09:21 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VwFKAdLc2p 00:17:39.215 20:09:21 -- common/autotest_common.sh@638 -- # local es=0 00:17:39.215 20:09:21 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VwFKAdLc2p 00:17:39.215 20:09:21 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:39.215 20:09:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.215 20:09:21 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:39.215 20:09:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:39.215 
20:09:21 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VwFKAdLc2p 00:17:39.215 20:09:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.215 20:09:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:39.215 20:09:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.215 20:09:21 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VwFKAdLc2p' 00:17:39.215 20:09:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.215 20:09:21 -- target/tls.sh@28 -- # bdevperf_pid=70152 00:17:39.215 20:09:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.215 20:09:21 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.215 20:09:21 -- target/tls.sh@31 -- # waitforlisten 70152 /var/tmp/bdevperf.sock 00:17:39.215 20:09:21 -- common/autotest_common.sh@817 -- # '[' -z 70152 ']' 00:17:39.215 20:09:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.215 20:09:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.215 20:09:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.215 20:09:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.215 20:09:21 -- common/autotest_common.sh@10 -- # set +x 00:17:39.215 [2024-04-24 20:09:21.453233] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:39.215 [2024-04-24 20:09:21.453310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70152 ] 00:17:39.474 [2024-04-24 20:09:21.578422] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.474 [2024-04-24 20:09:21.685604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.410 20:09:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:40.410 20:09:22 -- common/autotest_common.sh@850 -- # return 0 00:17:40.410 20:09:22 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VwFKAdLc2p 00:17:40.410 [2024-04-24 20:09:22.521548] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.410 [2024-04-24 20:09:22.521658] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.410 [2024-04-24 20:09:22.526345] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:40.410 [2024-04-24 20:09:22.526389] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:40.410 [2024-04-24 20:09:22.526437] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:40.410 [2024-04-24 20:09:22.527077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54fa80 (107): Transport endpoint is not connected 00:17:40.410 [2024-04-24 20:09:22.528062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54fa80 (9): Bad file descriptor 00:17:40.410 [2024-04-24 20:09:22.529058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:40.410 [2024-04-24 20:09:22.529079] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.410 [2024-04-24 20:09:22.529088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:40.410 request: 00:17:40.410 { 00:17:40.410 "name": "TLSTEST", 00:17:40.410 "trtype": "tcp", 00:17:40.410 "traddr": "10.0.0.2", 00:17:40.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.410 "adrfam": "ipv4", 00:17:40.410 "trsvcid": "4420", 00:17:40.410 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:40.410 "psk": "/tmp/tmp.VwFKAdLc2p", 00:17:40.410 "method": "bdev_nvme_attach_controller", 00:17:40.410 "req_id": 1 00:17:40.410 } 00:17:40.410 Got JSON-RPC error response 00:17:40.410 response: 00:17:40.410 { 00:17:40.410 "code": -32602, 00:17:40.410 "message": "Invalid parameters" 00:17:40.410 } 00:17:40.410 20:09:22 -- target/tls.sh@36 -- # killprocess 70152 00:17:40.410 20:09:22 -- common/autotest_common.sh@936 -- # '[' -z 70152 ']' 00:17:40.410 20:09:22 -- common/autotest_common.sh@940 -- # kill -0 70152 00:17:40.410 20:09:22 -- common/autotest_common.sh@941 -- # uname 00:17:40.410 20:09:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.410 20:09:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70152 00:17:40.410 killing process with pid 70152 00:17:40.410 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.410 00:17:40.410 Latency(us) 00:17:40.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.410 =================================================================================================================== 00:17:40.410 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.410 20:09:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.410 20:09:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.410 20:09:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70152' 00:17:40.410 20:09:22 -- common/autotest_common.sh@955 -- # kill 70152 00:17:40.410 [2024-04-24 20:09:22.583246] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:40.410 20:09:22 -- common/autotest_common.sh@960 -- # wait 70152 00:17:40.681 20:09:22 -- target/tls.sh@37 -- # return 1 00:17:40.681 20:09:22 -- common/autotest_common.sh@641 -- # es=1 00:17:40.681 20:09:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:40.681 20:09:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:40.681 20:09:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:40.681 20:09:22 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:40.681 20:09:22 -- common/autotest_common.sh@638 -- # local es=0 00:17:40.681 20:09:22 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:40.681 20:09:22 
-- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:40.681 20:09:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.681 20:09:22 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:40.681 20:09:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:40.681 20:09:22 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:40.681 20:09:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.681 20:09:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.681 20:09:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.681 20:09:22 -- target/tls.sh@23 -- # psk= 00:17:40.681 20:09:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.681 20:09:22 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.681 20:09:22 -- target/tls.sh@28 -- # bdevperf_pid=70174 00:17:40.681 20:09:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.681 20:09:22 -- target/tls.sh@31 -- # waitforlisten 70174 /var/tmp/bdevperf.sock 00:17:40.681 20:09:22 -- common/autotest_common.sh@817 -- # '[' -z 70174 ']' 00:17:40.681 20:09:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.681 20:09:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:40.681 20:09:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.681 20:09:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:40.681 20:09:22 -- common/autotest_common.sh@10 -- # set +x 00:17:40.681 [2024-04-24 20:09:22.854388] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:17:40.681 [2024-04-24 20:09:22.854491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70174 ] 00:17:40.938 [2024-04-24 20:09:22.990432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.938 [2024-04-24 20:09:23.097284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.874 20:09:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.874 20:09:23 -- common/autotest_common.sh@850 -- # return 0 00:17:41.874 20:09:23 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:41.874 [2024-04-24 20:09:23.972254] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.874 [2024-04-24 20:09:23.973720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88edc0 (9): Bad file descriptor 00:17:41.874 [2024-04-24 20:09:23.974713] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.874 [2024-04-24 20:09:23.974731] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.874 [2024-04-24 20:09:23.974741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:41.874 request: 00:17:41.874 { 00:17:41.874 "name": "TLSTEST", 00:17:41.874 "trtype": "tcp", 00:17:41.874 "traddr": "10.0.0.2", 00:17:41.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.874 "adrfam": "ipv4", 00:17:41.874 "trsvcid": "4420", 00:17:41.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.874 "method": "bdev_nvme_attach_controller", 00:17:41.874 "req_id": 1 00:17:41.874 } 00:17:41.874 Got JSON-RPC error response 00:17:41.874 response: 00:17:41.874 { 00:17:41.874 "code": -32602, 00:17:41.874 "message": "Invalid parameters" 00:17:41.874 } 00:17:41.874 20:09:23 -- target/tls.sh@36 -- # killprocess 70174 00:17:41.874 20:09:23 -- common/autotest_common.sh@936 -- # '[' -z 70174 ']' 00:17:41.874 20:09:23 -- common/autotest_common.sh@940 -- # kill -0 70174 00:17:41.874 20:09:23 -- common/autotest_common.sh@941 -- # uname 00:17:41.874 20:09:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.874 20:09:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70174 00:17:41.874 20:09:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.874 20:09:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.874 killing process with pid 70174 00:17:41.874 20:09:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70174' 00:17:41.874 20:09:24 -- common/autotest_common.sh@955 -- # kill 70174 00:17:41.874 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.874 00:17:41.874 Latency(us) 00:17:41.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.874 =================================================================================================================== 00:17:41.874 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.874 20:09:24 -- common/autotest_common.sh@960 -- # wait 70174 00:17:42.134 
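The case above drops the PSK entirely: bdev_nvme_attach_controller is issued with no --psk against a listener that was created with -k, the connection is torn down during setup, and the same -32602 response comes back. Reduced to its essentials (the test actually wraps the whole run_bdevperf helper, not the bare RPC):

  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1    # no --psk, expected to fail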
20:09:24 -- target/tls.sh@37 -- # return 1 00:17:42.134 20:09:24 -- common/autotest_common.sh@641 -- # es=1 00:17:42.134 20:09:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:42.134 20:09:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:42.134 20:09:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:42.134 20:09:24 -- target/tls.sh@158 -- # killprocess 69737 00:17:42.134 20:09:24 -- common/autotest_common.sh@936 -- # '[' -z 69737 ']' 00:17:42.134 20:09:24 -- common/autotest_common.sh@940 -- # kill -0 69737 00:17:42.134 20:09:24 -- common/autotest_common.sh@941 -- # uname 00:17:42.134 20:09:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.134 20:09:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69737 00:17:42.134 20:09:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:42.134 20:09:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:42.134 killing process with pid 69737 00:17:42.134 20:09:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69737' 00:17:42.134 20:09:24 -- common/autotest_common.sh@955 -- # kill 69737 00:17:42.134 [2024-04-24 20:09:24.270817] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:42.134 [2024-04-24 20:09:24.270850] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:42.134 20:09:24 -- common/autotest_common.sh@960 -- # wait 69737 00:17:42.394 20:09:24 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:42.394 20:09:24 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:42.394 20:09:24 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:42.394 20:09:24 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:42.394 20:09:24 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:42.394 20:09:24 -- nvmf/common.sh@693 -- # digest=2 00:17:42.394 20:09:24 -- nvmf/common.sh@694 -- # python - 00:17:42.394 20:09:24 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:42.394 20:09:24 -- target/tls.sh@160 -- # mktemp 00:17:42.394 20:09:24 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.iZTqSWOR1x 00:17:42.394 20:09:24 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:42.394 20:09:24 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.iZTqSWOR1x 00:17:42.394 20:09:24 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:42.394 20:09:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:42.394 20:09:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:42.394 20:09:24 -- common/autotest_common.sh@10 -- # set +x 00:17:42.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
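With the mismatch cases done, the first target (pid 69737) is shut down and a second round is prepared around a longer key: format_interchange_psk is called with digest 2, which shows up as the :02: hash identifier in the resulting NVMeTLSkey-1 string (presumably selecting SHA-384 instead of SHA-256; the log itself only shows the label). The key file handling is the same as before:

  key_long_path=$(mktemp)      # /tmp/tmp.iZTqSWOR1x in this run
  echo -n 'NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:' > "$key_long_path"
  chmod 0600 "$key_long_path"  # kept private here; a later case loosens this mode on purpose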
00:17:42.394 20:09:24 -- nvmf/common.sh@470 -- # nvmfpid=70217 00:17:42.394 20:09:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:42.394 20:09:24 -- nvmf/common.sh@471 -- # waitforlisten 70217 00:17:42.394 20:09:24 -- common/autotest_common.sh@817 -- # '[' -z 70217 ']' 00:17:42.394 20:09:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.394 20:09:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.394 20:09:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.394 20:09:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.394 20:09:24 -- common/autotest_common.sh@10 -- # set +x 00:17:42.394 [2024-04-24 20:09:24.617564] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:42.394 [2024-04-24 20:09:24.617669] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.653 [2024-04-24 20:09:24.761168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.653 [2024-04-24 20:09:24.865194] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.653 [2024-04-24 20:09:24.865256] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.653 [2024-04-24 20:09:24.865263] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.653 [2024-04-24 20:09:24.865269] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.653 [2024-04-24 20:09:24.865274] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:42.653 [2024-04-24 20:09:24.865306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.591 20:09:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.591 20:09:25 -- common/autotest_common.sh@850 -- # return 0 00:17:43.591 20:09:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.591 20:09:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.591 20:09:25 -- common/autotest_common.sh@10 -- # set +x 00:17:43.591 20:09:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.591 20:09:25 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.iZTqSWOR1x 00:17:43.591 20:09:25 -- target/tls.sh@49 -- # local key=/tmp/tmp.iZTqSWOR1x 00:17:43.591 20:09:25 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.591 [2024-04-24 20:09:25.759759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.591 20:09:25 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.851 20:09:26 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:44.110 [2024-04-24 20:09:26.234887] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:44.110 [2024-04-24 20:09:26.234971] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:44.110 [2024-04-24 20:09:26.235137] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.110 20:09:26 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:44.370 malloc0 00:17:44.370 20:09:26 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:44.629 20:09:26 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:17:44.629 [2024-04-24 20:09:26.878887] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:44.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
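One detail of the restart a few lines above is worth noting: the first nvmf_tgt ran with --wait-for-rpc so that the ssl socket implementation could be configured (tls-version, ktls) before framework_start_init, while this second instance skips --wait-for-rpc because no pre-init socket options are needed. The pattern used for the first target, condensed from earlier in this trace:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13   # applied before init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init                            # then finish startup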
00:17:44.887 20:09:26 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iZTqSWOR1x 00:17:44.887 20:09:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.887 20:09:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.887 20:09:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.887 20:09:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iZTqSWOR1x' 00:17:44.887 20:09:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.887 20:09:26 -- target/tls.sh@28 -- # bdevperf_pid=70266 00:17:44.887 20:09:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.887 20:09:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.887 20:09:26 -- target/tls.sh@31 -- # waitforlisten 70266 /var/tmp/bdevperf.sock 00:17:44.887 20:09:26 -- common/autotest_common.sh@817 -- # '[' -z 70266 ']' 00:17:44.887 20:09:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.887 20:09:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:44.887 20:09:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.887 20:09:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:44.887 20:09:26 -- common/autotest_common.sh@10 -- # set +x 00:17:44.887 [2024-04-24 20:09:26.946401] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:44.887 [2024-04-24 20:09:26.946609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70266 ] 00:17:44.887 [2024-04-24 20:09:27.090661] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.146 [2024-04-24 20:09:27.209299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.713 20:09:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:45.713 20:09:27 -- common/autotest_common.sh@850 -- # return 0 00:17:45.713 20:09:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:17:45.971 [2024-04-24 20:09:27.999602] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.971 [2024-04-24 20:09:28.000158] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:45.971 TLSTESTn1 00:17:45.971 20:09:28 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.971 Running I/O for 10 seconds... 
00:17:55.952 00:17:55.952 Latency(us) 00:17:55.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.952 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.952 Verification LBA range: start 0x0 length 0x2000 00:17:55.952 TLSTESTn1 : 10.01 5170.59 20.20 0.00 0.00 24712.32 5695.05 41210.41 00:17:55.952 =================================================================================================================== 00:17:55.952 Total : 5170.59 20.20 0.00 0.00 24712.32 5695.05 41210.41 00:17:55.952 0 00:17:56.211 20:09:38 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.211 20:09:38 -- target/tls.sh@45 -- # killprocess 70266 00:17:56.211 20:09:38 -- common/autotest_common.sh@936 -- # '[' -z 70266 ']' 00:17:56.211 20:09:38 -- common/autotest_common.sh@940 -- # kill -0 70266 00:17:56.211 20:09:38 -- common/autotest_common.sh@941 -- # uname 00:17:56.211 20:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.211 20:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70266 00:17:56.211 20:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:56.211 20:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:56.211 20:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70266' 00:17:56.211 killing process with pid 70266 00:17:56.211 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.211 00:17:56.211 Latency(us) 00:17:56.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.211 =================================================================================================================== 00:17:56.211 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.211 20:09:38 -- common/autotest_common.sh@955 -- # kill 70266 00:17:56.211 [2024-04-24 20:09:38.251656] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:56.211 20:09:38 -- common/autotest_common.sh@960 -- # wait 70266 00:17:56.470 20:09:38 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.iZTqSWOR1x 00:17:56.470 20:09:38 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iZTqSWOR1x 00:17:56.471 20:09:38 -- common/autotest_common.sh@638 -- # local es=0 00:17:56.471 20:09:38 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iZTqSWOR1x 00:17:56.471 20:09:38 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:17:56.471 20:09:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.471 20:09:38 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:17:56.471 20:09:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:56.471 20:09:38 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iZTqSWOR1x 00:17:56.471 20:09:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.471 20:09:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.471 20:09:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.471 20:09:38 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iZTqSWOR1x' 00:17:56.471 20:09:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.471 20:09:38 -- target/tls.sh@28 -- # bdevperf_pid=70397 00:17:56.471 
20:09:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.471 20:09:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.471 20:09:38 -- target/tls.sh@31 -- # waitforlisten 70397 /var/tmp/bdevperf.sock 00:17:56.471 20:09:38 -- common/autotest_common.sh@817 -- # '[' -z 70397 ']' 00:17:56.471 20:09:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.471 20:09:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.471 20:09:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.471 20:09:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.471 20:09:38 -- common/autotest_common.sh@10 -- # set +x 00:17:56.471 [2024-04-24 20:09:38.555561] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:56.471 [2024-04-24 20:09:38.555790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70397 ] 00:17:56.471 [2024-04-24 20:09:38.702247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.729 [2024-04-24 20:09:38.808990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.296 20:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.296 20:09:39 -- common/autotest_common.sh@850 -- # return 0 00:17:57.296 20:09:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:17:57.555 [2024-04-24 20:09:39.643971] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.555 [2024-04-24 20:09:39.644589] bdev_nvme.c:6054:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:57.555 [2024-04-24 20:09:39.644729] bdev_nvme.c:6163:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.iZTqSWOR1x 00:17:57.555 request: 00:17:57.555 { 00:17:57.555 "name": "TLSTEST", 00:17:57.555 "trtype": "tcp", 00:17:57.555 "traddr": "10.0.0.2", 00:17:57.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.555 "adrfam": "ipv4", 00:17:57.555 "trsvcid": "4420", 00:17:57.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.555 "psk": "/tmp/tmp.iZTqSWOR1x", 00:17:57.555 "method": "bdev_nvme_attach_controller", 00:17:57.555 "req_id": 1 00:17:57.555 } 00:17:57.555 Got JSON-RPC error response 00:17:57.555 response: 00:17:57.555 { 00:17:57.555 "code": -1, 00:17:57.555 "message": "Operation not permitted" 00:17:57.555 } 00:17:57.555 20:09:39 -- target/tls.sh@36 -- # killprocess 70397 00:17:57.555 20:09:39 -- common/autotest_common.sh@936 -- # '[' -z 70397 ']' 00:17:57.555 20:09:39 -- common/autotest_common.sh@940 -- # kill -0 70397 00:17:57.555 20:09:39 -- common/autotest_common.sh@941 -- # uname 00:17:57.555 20:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.555 20:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70397 00:17:57.555 20:09:39 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:17:57.555 killing process with pid 70397 00:17:57.555 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.555 00:17:57.555 Latency(us) 00:17:57.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.555 =================================================================================================================== 00:17:57.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.555 20:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.555 20:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70397' 00:17:57.555 20:09:39 -- common/autotest_common.sh@955 -- # kill 70397 00:17:57.555 20:09:39 -- common/autotest_common.sh@960 -- # wait 70397 00:17:57.814 20:09:39 -- target/tls.sh@37 -- # return 1 00:17:57.814 20:09:39 -- common/autotest_common.sh@641 -- # es=1 00:17:57.814 20:09:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:57.814 20:09:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:57.814 20:09:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:57.814 20:09:39 -- target/tls.sh@174 -- # killprocess 70217 00:17:57.814 20:09:39 -- common/autotest_common.sh@936 -- # '[' -z 70217 ']' 00:17:57.814 20:09:39 -- common/autotest_common.sh@940 -- # kill -0 70217 00:17:57.814 20:09:39 -- common/autotest_common.sh@941 -- # uname 00:17:57.814 20:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.814 20:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70217 00:17:57.814 killing process with pid 70217 00:17:57.814 20:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:57.814 20:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:57.814 20:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70217' 00:17:57.814 20:09:39 -- common/autotest_common.sh@955 -- # kill 70217 00:17:57.814 [2024-04-24 20:09:39.937042] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:57.814 [2024-04-24 20:09:39.937095] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:57.814 20:09:39 -- common/autotest_common.sh@960 -- # wait 70217 00:17:58.101 20:09:40 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:58.101 20:09:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:58.101 20:09:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:58.101 20:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:58.101 20:09:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.101 20:09:40 -- nvmf/common.sh@470 -- # nvmfpid=70435 00:17:58.101 20:09:40 -- nvmf/common.sh@471 -- # waitforlisten 70435 00:17:58.101 20:09:40 -- common/autotest_common.sh@817 -- # '[' -z 70435 ']' 00:17:58.101 20:09:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.101 20:09:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.101 20:09:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
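The failure just logged is the point of the chmod 0666 case: bdev_nvme refuses to load a PSK file that is readable by others. A sketch of the check, with the same key path and attach command as in the trace (the chmod 0600 at the end is what the script does a few steps later to restore the key before the next positive run):

    chmod 0666 /tmp/tmp.iZTqSWOR1x        # world-readable key
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x
    # -> "Incorrect permissions for PSK file" / JSON-RPC error -1 "Operation not permitted"
    chmod 0600 /tmp/tmp.iZTqSWOR1x        # owner-only again; the attach succeeds afterwards

The target enforces the same rule on its side: with the 0666 key still in place, the nvmf_subsystem_add_host call in the next case fails with -32603 "Internal error" instead of registering the host.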
00:17:58.101 20:09:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.101 20:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:58.101 [2024-04-24 20:09:40.238986] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:17:58.101 [2024-04-24 20:09:40.239730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.387 [2024-04-24 20:09:40.381429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.387 [2024-04-24 20:09:40.482082] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.387 [2024-04-24 20:09:40.482146] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.387 [2024-04-24 20:09:40.482153] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.387 [2024-04-24 20:09:40.482159] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.387 [2024-04-24 20:09:40.482163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.387 [2024-04-24 20:09:40.482185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.955 20:09:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.956 20:09:41 -- common/autotest_common.sh@850 -- # return 0 00:17:58.956 20:09:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:58.956 20:09:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:58.956 20:09:41 -- common/autotest_common.sh@10 -- # set +x 00:17:58.956 20:09:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.956 20:09:41 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.iZTqSWOR1x 00:17:58.956 20:09:41 -- common/autotest_common.sh@638 -- # local es=0 00:17:58.956 20:09:41 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.iZTqSWOR1x 00:17:58.956 20:09:41 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:17:58.956 20:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:58.956 20:09:41 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:17:58.956 20:09:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:58.956 20:09:41 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.iZTqSWOR1x 00:17:58.956 20:09:41 -- target/tls.sh@49 -- # local key=/tmp/tmp.iZTqSWOR1x 00:17:58.956 20:09:41 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.215 [2024-04-24 20:09:41.374404] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.215 20:09:41 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.475 20:09:41 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:59.735 [2024-04-24 20:09:41.769697] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:59.735 [2024-04-24 20:09:41.769782] tcp.c: 925:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:17:59.735 [2024-04-24 20:09:41.769936] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.735 20:09:41 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:59.995 malloc0 00:17:59.995 20:09:42 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:59.995 20:09:42 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:18:00.338 [2024-04-24 20:09:42.385435] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:00.338 [2024-04-24 20:09:42.385471] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:00.338 [2024-04-24 20:09:42.385492] subsystem.c: 967:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:00.338 request: 00:18:00.338 { 00:18:00.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.338 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.338 "psk": "/tmp/tmp.iZTqSWOR1x", 00:18:00.338 "method": "nvmf_subsystem_add_host", 00:18:00.338 "req_id": 1 00:18:00.338 } 00:18:00.338 Got JSON-RPC error response 00:18:00.338 response: 00:18:00.338 { 00:18:00.338 "code": -32603, 00:18:00.338 "message": "Internal error" 00:18:00.338 } 00:18:00.338 20:09:42 -- common/autotest_common.sh@641 -- # es=1 00:18:00.338 20:09:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:00.338 20:09:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:00.338 20:09:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:00.338 20:09:42 -- target/tls.sh@180 -- # killprocess 70435 00:18:00.338 20:09:42 -- common/autotest_common.sh@936 -- # '[' -z 70435 ']' 00:18:00.338 20:09:42 -- common/autotest_common.sh@940 -- # kill -0 70435 00:18:00.338 20:09:42 -- common/autotest_common.sh@941 -- # uname 00:18:00.338 20:09:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.338 20:09:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70435 00:18:00.338 killing process with pid 70435 00:18:00.338 20:09:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:00.338 20:09:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:00.338 20:09:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70435' 00:18:00.338 20:09:42 -- common/autotest_common.sh@955 -- # kill 70435 00:18:00.338 [2024-04-24 20:09:42.441864] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:00.338 20:09:42 -- common/autotest_common.sh@960 -- # wait 70435 00:18:00.605 20:09:42 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.iZTqSWOR1x 00:18:00.605 20:09:42 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:00.605 20:09:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:00.605 20:09:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:00.605 20:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 20:09:42 -- nvmf/common.sh@470 -- # nvmfpid=70492 00:18:00.606 20:09:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.606 20:09:42 -- nvmf/common.sh@471 -- # waitforlisten 
70492 00:18:00.606 20:09:42 -- common/autotest_common.sh@817 -- # '[' -z 70492 ']' 00:18:00.606 20:09:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.606 20:09:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.606 20:09:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.606 20:09:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.606 20:09:42 -- common/autotest_common.sh@10 -- # set +x 00:18:00.606 [2024-04-24 20:09:42.747113] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:00.606 [2024-04-24 20:09:42.747185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.865 [2024-04-24 20:09:42.885628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.865 [2024-04-24 20:09:42.989711] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.865 [2024-04-24 20:09:42.989761] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.865 [2024-04-24 20:09:42.989769] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.865 [2024-04-24 20:09:42.989774] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.865 [2024-04-24 20:09:42.989779] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.865 [2024-04-24 20:09:42.989801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.432 20:09:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:01.432 20:09:43 -- common/autotest_common.sh@850 -- # return 0 00:18:01.432 20:09:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:01.432 20:09:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:01.432 20:09:43 -- common/autotest_common.sh@10 -- # set +x 00:18:01.432 20:09:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.432 20:09:43 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.iZTqSWOR1x 00:18:01.432 20:09:43 -- target/tls.sh@49 -- # local key=/tmp/tmp.iZTqSWOR1x 00:18:01.432 20:09:43 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.692 [2024-04-24 20:09:43.840416] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.692 20:09:43 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.951 20:09:44 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:02.209 [2024-04-24 20:09:44.235718] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:02.209 [2024-04-24 20:09:44.235814] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.209 [2024-04-24 20:09:44.235990] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.209 20:09:44 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.468 malloc0 00:18:02.468 20:09:44 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.727 20:09:44 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:18:02.727 [2024-04-24 20:09:44.907980] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:02.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.727 20:09:44 -- target/tls.sh@188 -- # bdevperf_pid=70547 00:18:02.727 20:09:44 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.727 20:09:44 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.727 20:09:44 -- target/tls.sh@191 -- # waitforlisten 70547 /var/tmp/bdevperf.sock 00:18:02.727 20:09:44 -- common/autotest_common.sh@817 -- # '[' -z 70547 ']' 00:18:02.727 20:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.727 20:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.727 20:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
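I/O for each positive case is driven by bdevperf started idle (-z) on its own RPC socket, then configured and kicked off remotely. A rough sketch of that pattern as the trace uses it (paths are relative to the SPDK tree; in the script bdevperf runs in the background and waitforlisten blocks until the socket appears):

    # Start bdevperf idle with a private RPC socket and a 128-deep 4K verify workload.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # Attach the NVMe/TCP controller over TLS with the same PSK the target was given.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x
    # Run the configured workload and print the per-bdev IOPS/latency table.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests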
00:18:02.727 20:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.727 20:09:44 -- common/autotest_common.sh@10 -- # set +x 00:18:02.727 [2024-04-24 20:09:44.974234] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:02.727 [2024-04-24 20:09:44.974317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70547 ] 00:18:02.987 [2024-04-24 20:09:45.115825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.987 [2024-04-24 20:09:45.222995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.925 20:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:03.926 20:09:45 -- common/autotest_common.sh@850 -- # return 0 00:18:03.926 20:09:45 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:18:03.926 [2024-04-24 20:09:46.041795] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.926 [2024-04-24 20:09:46.042020] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:03.926 TLSTESTn1 00:18:03.926 20:09:46 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:04.494 20:09:46 -- target/tls.sh@196 -- # tgtconf='{ 00:18:04.495 "subsystems": [ 00:18:04.495 { 00:18:04.495 "subsystem": "keyring", 00:18:04.495 "config": [] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "iobuf", 00:18:04.495 "config": [ 00:18:04.495 { 00:18:04.495 "method": "iobuf_set_options", 00:18:04.495 "params": { 00:18:04.495 "small_pool_count": 8192, 00:18:04.495 "large_pool_count": 1024, 00:18:04.495 "small_bufsize": 8192, 00:18:04.495 "large_bufsize": 135168 00:18:04.495 } 00:18:04.495 } 00:18:04.495 ] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "sock", 00:18:04.495 "config": [ 00:18:04.495 { 00:18:04.495 "method": "sock_impl_set_options", 00:18:04.495 "params": { 00:18:04.495 "impl_name": "uring", 00:18:04.495 "recv_buf_size": 2097152, 00:18:04.495 "send_buf_size": 2097152, 00:18:04.495 "enable_recv_pipe": true, 00:18:04.495 "enable_quickack": false, 00:18:04.495 "enable_placement_id": 0, 00:18:04.495 "enable_zerocopy_send_server": false, 00:18:04.495 "enable_zerocopy_send_client": false, 00:18:04.495 "zerocopy_threshold": 0, 00:18:04.495 "tls_version": 0, 00:18:04.495 "enable_ktls": false 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "sock_impl_set_options", 00:18:04.495 "params": { 00:18:04.495 "impl_name": "posix", 00:18:04.495 "recv_buf_size": 2097152, 00:18:04.495 "send_buf_size": 2097152, 00:18:04.495 "enable_recv_pipe": true, 00:18:04.495 "enable_quickack": false, 00:18:04.495 "enable_placement_id": 0, 00:18:04.495 "enable_zerocopy_send_server": true, 00:18:04.495 "enable_zerocopy_send_client": false, 00:18:04.495 "zerocopy_threshold": 0, 00:18:04.495 "tls_version": 0, 00:18:04.495 "enable_ktls": false 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "sock_impl_set_options", 00:18:04.495 "params": { 00:18:04.495 "impl_name": "ssl", 00:18:04.495 "recv_buf_size": 4096, 00:18:04.495 
"send_buf_size": 4096, 00:18:04.495 "enable_recv_pipe": true, 00:18:04.495 "enable_quickack": false, 00:18:04.495 "enable_placement_id": 0, 00:18:04.495 "enable_zerocopy_send_server": true, 00:18:04.495 "enable_zerocopy_send_client": false, 00:18:04.495 "zerocopy_threshold": 0, 00:18:04.495 "tls_version": 0, 00:18:04.495 "enable_ktls": false 00:18:04.495 } 00:18:04.495 } 00:18:04.495 ] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "vmd", 00:18:04.495 "config": [] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "accel", 00:18:04.495 "config": [ 00:18:04.495 { 00:18:04.495 "method": "accel_set_options", 00:18:04.495 "params": { 00:18:04.495 "small_cache_size": 128, 00:18:04.495 "large_cache_size": 16, 00:18:04.495 "task_count": 2048, 00:18:04.495 "sequence_count": 2048, 00:18:04.495 "buf_count": 2048 00:18:04.495 } 00:18:04.495 } 00:18:04.495 ] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "bdev", 00:18:04.495 "config": [ 00:18:04.495 { 00:18:04.495 "method": "bdev_set_options", 00:18:04.495 "params": { 00:18:04.495 "bdev_io_pool_size": 65535, 00:18:04.495 "bdev_io_cache_size": 256, 00:18:04.495 "bdev_auto_examine": true, 00:18:04.495 "iobuf_small_cache_size": 128, 00:18:04.495 "iobuf_large_cache_size": 16 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "bdev_raid_set_options", 00:18:04.495 "params": { 00:18:04.495 "process_window_size_kb": 1024 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "bdev_iscsi_set_options", 00:18:04.495 "params": { 00:18:04.495 "timeout_sec": 30 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "bdev_nvme_set_options", 00:18:04.495 "params": { 00:18:04.495 "action_on_timeout": "none", 00:18:04.495 "timeout_us": 0, 00:18:04.495 "timeout_admin_us": 0, 00:18:04.495 "keep_alive_timeout_ms": 10000, 00:18:04.495 "arbitration_burst": 0, 00:18:04.495 "low_priority_weight": 0, 00:18:04.495 "medium_priority_weight": 0, 00:18:04.495 "high_priority_weight": 0, 00:18:04.495 "nvme_adminq_poll_period_us": 10000, 00:18:04.495 "nvme_ioq_poll_period_us": 0, 00:18:04.495 "io_queue_requests": 0, 00:18:04.495 "delay_cmd_submit": true, 00:18:04.495 "transport_retry_count": 4, 00:18:04.495 "bdev_retry_count": 3, 00:18:04.495 "transport_ack_timeout": 0, 00:18:04.495 "ctrlr_loss_timeout_sec": 0, 00:18:04.495 "reconnect_delay_sec": 0, 00:18:04.495 "fast_io_fail_timeout_sec": 0, 00:18:04.495 "disable_auto_failback": false, 00:18:04.495 "generate_uuids": false, 00:18:04.495 "transport_tos": 0, 00:18:04.495 "nvme_error_stat": false, 00:18:04.495 "rdma_srq_size": 0, 00:18:04.495 "io_path_stat": false, 00:18:04.495 "allow_accel_sequence": false, 00:18:04.495 "rdma_max_cq_size": 0, 00:18:04.495 "rdma_cm_event_timeout_ms": 0, 00:18:04.495 "dhchap_digests": [ 00:18:04.495 "sha256", 00:18:04.495 "sha384", 00:18:04.495 "sha512" 00:18:04.495 ], 00:18:04.495 "dhchap_dhgroups": [ 00:18:04.495 "null", 00:18:04.495 "ffdhe2048", 00:18:04.495 "ffdhe3072", 00:18:04.495 "ffdhe4096", 00:18:04.495 "ffdhe6144", 00:18:04.495 "ffdhe8192" 00:18:04.495 ] 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "bdev_nvme_set_hotplug", 00:18:04.495 "params": { 00:18:04.495 "period_us": 100000, 00:18:04.495 "enable": false 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "bdev_malloc_create", 00:18:04.495 "params": { 00:18:04.495 "name": "malloc0", 00:18:04.495 "num_blocks": 8192, 00:18:04.495 "block_size": 4096, 00:18:04.495 "physical_block_size": 4096, 00:18:04.495 "uuid": 
"c1b7ed2a-c582-4c69-bcc0-925a80ea8847", 00:18:04.495 "optimal_io_boundary": 0 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "bdev_wait_for_examine" 00:18:04.495 } 00:18:04.495 ] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "nbd", 00:18:04.495 "config": [] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "scheduler", 00:18:04.495 "config": [ 00:18:04.495 { 00:18:04.495 "method": "framework_set_scheduler", 00:18:04.495 "params": { 00:18:04.495 "name": "static" 00:18:04.495 } 00:18:04.495 } 00:18:04.495 ] 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "subsystem": "nvmf", 00:18:04.495 "config": [ 00:18:04.495 { 00:18:04.495 "method": "nvmf_set_config", 00:18:04.495 "params": { 00:18:04.495 "discovery_filter": "match_any", 00:18:04.495 "admin_cmd_passthru": { 00:18:04.495 "identify_ctrlr": false 00:18:04.495 } 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "nvmf_set_max_subsystems", 00:18:04.495 "params": { 00:18:04.495 "max_subsystems": 1024 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "nvmf_set_crdt", 00:18:04.495 "params": { 00:18:04.495 "crdt1": 0, 00:18:04.495 "crdt2": 0, 00:18:04.495 "crdt3": 0 00:18:04.495 } 00:18:04.495 }, 00:18:04.495 { 00:18:04.495 "method": "nvmf_create_transport", 00:18:04.495 "params": { 00:18:04.495 "trtype": "TCP", 00:18:04.495 "max_queue_depth": 128, 00:18:04.495 "max_io_qpairs_per_ctrlr": 127, 00:18:04.495 "in_capsule_data_size": 4096, 00:18:04.495 "max_io_size": 131072, 00:18:04.495 "io_unit_size": 131072, 00:18:04.495 "max_aq_depth": 128, 00:18:04.495 "num_shared_buffers": 511, 00:18:04.495 "buf_cache_size": 4294967295, 00:18:04.495 "dif_insert_or_strip": false, 00:18:04.495 "zcopy": false, 00:18:04.495 "c2h_success": false, 00:18:04.495 "sock_priority": 0, 00:18:04.495 "abort_timeout_sec": 1, 00:18:04.495 "ack_timeout": 0 00:18:04.495 } 00:18:04.495 }, 00:18:04.496 { 00:18:04.496 "method": "nvmf_create_subsystem", 00:18:04.496 "params": { 00:18:04.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.496 "allow_any_host": false, 00:18:04.496 "serial_number": "SPDK00000000000001", 00:18:04.496 "model_number": "SPDK bdev Controller", 00:18:04.496 "max_namespaces": 10, 00:18:04.496 "min_cntlid": 1, 00:18:04.496 "max_cntlid": 65519, 00:18:04.496 "ana_reporting": false 00:18:04.496 } 00:18:04.496 }, 00:18:04.496 { 00:18:04.496 "method": "nvmf_subsystem_add_host", 00:18:04.496 "params": { 00:18:04.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.496 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.496 "psk": "/tmp/tmp.iZTqSWOR1x" 00:18:04.496 } 00:18:04.496 }, 00:18:04.496 { 00:18:04.496 "method": "nvmf_subsystem_add_ns", 00:18:04.496 "params": { 00:18:04.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.496 "namespace": { 00:18:04.496 "nsid": 1, 00:18:04.496 "bdev_name": "malloc0", 00:18:04.496 "nguid": "C1B7ED2AC5824C69BCC0925A80EA8847", 00:18:04.496 "uuid": "c1b7ed2a-c582-4c69-bcc0-925a80ea8847", 00:18:04.496 "no_auto_visible": false 00:18:04.496 } 00:18:04.496 } 00:18:04.496 }, 00:18:04.496 { 00:18:04.496 "method": "nvmf_subsystem_add_listener", 00:18:04.496 "params": { 00:18:04.496 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.496 "listen_address": { 00:18:04.496 "trtype": "TCP", 00:18:04.496 "adrfam": "IPv4", 00:18:04.496 "traddr": "10.0.0.2", 00:18:04.496 "trsvcid": "4420" 00:18:04.496 }, 00:18:04.496 "secure_channel": true 00:18:04.496 } 00:18:04.496 } 00:18:04.496 ] 00:18:04.496 } 00:18:04.496 ] 00:18:04.496 }' 00:18:04.496 20:09:46 -- target/tls.sh@197 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:04.757 20:09:46 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:04.757 "subsystems": [ 00:18:04.757 { 00:18:04.757 "subsystem": "keyring", 00:18:04.757 "config": [] 00:18:04.757 }, 00:18:04.757 { 00:18:04.757 "subsystem": "iobuf", 00:18:04.757 "config": [ 00:18:04.757 { 00:18:04.757 "method": "iobuf_set_options", 00:18:04.757 "params": { 00:18:04.757 "small_pool_count": 8192, 00:18:04.757 "large_pool_count": 1024, 00:18:04.757 "small_bufsize": 8192, 00:18:04.757 "large_bufsize": 135168 00:18:04.757 } 00:18:04.757 } 00:18:04.757 ] 00:18:04.757 }, 00:18:04.757 { 00:18:04.757 "subsystem": "sock", 00:18:04.757 "config": [ 00:18:04.757 { 00:18:04.757 "method": "sock_impl_set_options", 00:18:04.757 "params": { 00:18:04.757 "impl_name": "uring", 00:18:04.757 "recv_buf_size": 2097152, 00:18:04.757 "send_buf_size": 2097152, 00:18:04.757 "enable_recv_pipe": true, 00:18:04.757 "enable_quickack": false, 00:18:04.757 "enable_placement_id": 0, 00:18:04.757 "enable_zerocopy_send_server": false, 00:18:04.757 "enable_zerocopy_send_client": false, 00:18:04.757 "zerocopy_threshold": 0, 00:18:04.757 "tls_version": 0, 00:18:04.757 "enable_ktls": false 00:18:04.757 } 00:18:04.757 }, 00:18:04.758 { 00:18:04.758 "method": "sock_impl_set_options", 00:18:04.758 "params": { 00:18:04.758 "impl_name": "posix", 00:18:04.758 "recv_buf_size": 2097152, 00:18:04.758 "send_buf_size": 2097152, 00:18:04.758 "enable_recv_pipe": true, 00:18:04.758 "enable_quickack": false, 00:18:04.758 "enable_placement_id": 0, 00:18:04.758 "enable_zerocopy_send_server": true, 00:18:04.758 "enable_zerocopy_send_client": false, 00:18:04.758 "zerocopy_threshold": 0, 00:18:04.758 "tls_version": 0, 00:18:04.758 "enable_ktls": false 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "sock_impl_set_options", 00:18:04.758 "params": { 00:18:04.758 "impl_name": "ssl", 00:18:04.758 "recv_buf_size": 4096, 00:18:04.758 "send_buf_size": 4096, 00:18:04.758 "enable_recv_pipe": true, 00:18:04.758 "enable_quickack": false, 00:18:04.758 "enable_placement_id": 0, 00:18:04.758 "enable_zerocopy_send_server": true, 00:18:04.758 "enable_zerocopy_send_client": false, 00:18:04.758 "zerocopy_threshold": 0, 00:18:04.758 "tls_version": 0, 00:18:04.758 "enable_ktls": false 00:18:04.758 } 00:18:04.758 } 00:18:04.758 ] 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "subsystem": "vmd", 00:18:04.758 "config": [] 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "subsystem": "accel", 00:18:04.758 "config": [ 00:18:04.758 { 00:18:04.758 "method": "accel_set_options", 00:18:04.758 "params": { 00:18:04.758 "small_cache_size": 128, 00:18:04.758 "large_cache_size": 16, 00:18:04.758 "task_count": 2048, 00:18:04.758 "sequence_count": 2048, 00:18:04.758 "buf_count": 2048 00:18:04.758 } 00:18:04.758 } 00:18:04.758 ] 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "subsystem": "bdev", 00:18:04.758 "config": [ 00:18:04.758 { 00:18:04.758 "method": "bdev_set_options", 00:18:04.758 "params": { 00:18:04.758 "bdev_io_pool_size": 65535, 00:18:04.758 "bdev_io_cache_size": 256, 00:18:04.758 "bdev_auto_examine": true, 00:18:04.758 "iobuf_small_cache_size": 128, 00:18:04.758 "iobuf_large_cache_size": 16 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "bdev_raid_set_options", 00:18:04.758 "params": { 00:18:04.758 "process_window_size_kb": 1024 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "bdev_iscsi_set_options", 00:18:04.758 "params": { 00:18:04.758 
"timeout_sec": 30 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "bdev_nvme_set_options", 00:18:04.758 "params": { 00:18:04.758 "action_on_timeout": "none", 00:18:04.758 "timeout_us": 0, 00:18:04.758 "timeout_admin_us": 0, 00:18:04.758 "keep_alive_timeout_ms": 10000, 00:18:04.758 "arbitration_burst": 0, 00:18:04.758 "low_priority_weight": 0, 00:18:04.758 "medium_priority_weight": 0, 00:18:04.758 "high_priority_weight": 0, 00:18:04.758 "nvme_adminq_poll_period_us": 10000, 00:18:04.758 "nvme_ioq_poll_period_us": 0, 00:18:04.758 "io_queue_requests": 512, 00:18:04.758 "delay_cmd_submit": true, 00:18:04.758 "transport_retry_count": 4, 00:18:04.758 "bdev_retry_count": 3, 00:18:04.758 "transport_ack_timeout": 0, 00:18:04.758 "ctrlr_loss_timeout_sec": 0, 00:18:04.758 "reconnect_delay_sec": 0, 00:18:04.758 "fast_io_fail_timeout_sec": 0, 00:18:04.758 "disable_auto_failback": false, 00:18:04.758 "generate_uuids": false, 00:18:04.758 "transport_tos": 0, 00:18:04.758 "nvme_error_stat": false, 00:18:04.758 "rdma_srq_size": 0, 00:18:04.758 "io_path_stat": false, 00:18:04.758 "allow_accel_sequence": false, 00:18:04.758 "rdma_max_cq_size": 0, 00:18:04.758 "rdma_cm_event_timeout_ms": 0, 00:18:04.758 "dhchap_digests": [ 00:18:04.758 "sha256", 00:18:04.758 "sha384", 00:18:04.758 "sha512" 00:18:04.758 ], 00:18:04.758 "dhchap_dhgroups": [ 00:18:04.758 "null", 00:18:04.758 "ffdhe2048", 00:18:04.758 "ffdhe3072", 00:18:04.758 "ffdhe4096", 00:18:04.758 "ffdhe6144", 00:18:04.758 "ffdhe8192" 00:18:04.758 ] 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "bdev_nvme_attach_controller", 00:18:04.758 "params": { 00:18:04.758 "name": "TLSTEST", 00:18:04.758 "trtype": "TCP", 00:18:04.758 "adrfam": "IPv4", 00:18:04.758 "traddr": "10.0.0.2", 00:18:04.758 "trsvcid": "4420", 00:18:04.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.758 "prchk_reftag": false, 00:18:04.758 "prchk_guard": false, 00:18:04.758 "ctrlr_loss_timeout_sec": 0, 00:18:04.758 "reconnect_delay_sec": 0, 00:18:04.758 "fast_io_fail_timeout_sec": 0, 00:18:04.758 "psk": "/tmp/tmp.iZTqSWOR1x", 00:18:04.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.758 "hdgst": false, 00:18:04.758 "ddgst": false 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "bdev_nvme_set_hotplug", 00:18:04.758 "params": { 00:18:04.758 "period_us": 100000, 00:18:04.758 "enable": false 00:18:04.758 } 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "method": "bdev_wait_for_examine" 00:18:04.758 } 00:18:04.758 ] 00:18:04.758 }, 00:18:04.758 { 00:18:04.758 "subsystem": "nbd", 00:18:04.758 "config": [] 00:18:04.758 } 00:18:04.758 ] 00:18:04.758 }' 00:18:04.758 20:09:46 -- target/tls.sh@199 -- # killprocess 70547 00:18:04.758 20:09:46 -- common/autotest_common.sh@936 -- # '[' -z 70547 ']' 00:18:04.758 20:09:46 -- common/autotest_common.sh@940 -- # kill -0 70547 00:18:04.758 20:09:46 -- common/autotest_common.sh@941 -- # uname 00:18:04.758 20:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.758 20:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70547 00:18:04.758 killing process with pid 70547 00:18:04.758 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.758 00:18:04.758 Latency(us) 00:18:04.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.758 =================================================================================================================== 00:18:04.758 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 
0.00 00:18:04.758 20:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:04.758 20:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:04.758 20:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70547' 00:18:04.758 20:09:46 -- common/autotest_common.sh@955 -- # kill 70547 00:18:04.758 [2024-04-24 20:09:46.823893] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:04.758 20:09:46 -- common/autotest_common.sh@960 -- # wait 70547 00:18:05.022 20:09:47 -- target/tls.sh@200 -- # killprocess 70492 00:18:05.022 20:09:47 -- common/autotest_common.sh@936 -- # '[' -z 70492 ']' 00:18:05.022 20:09:47 -- common/autotest_common.sh@940 -- # kill -0 70492 00:18:05.022 20:09:47 -- common/autotest_common.sh@941 -- # uname 00:18:05.022 20:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.022 20:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70492 00:18:05.022 killing process with pid 70492 00:18:05.022 20:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:05.022 20:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:05.022 20:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70492' 00:18:05.022 20:09:47 -- common/autotest_common.sh@955 -- # kill 70492 00:18:05.022 [2024-04-24 20:09:47.082553] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:05.022 [2024-04-24 20:09:47.082597] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:05.022 20:09:47 -- common/autotest_common.sh@960 -- # wait 70492 00:18:05.282 20:09:47 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:05.282 20:09:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:05.282 20:09:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:05.282 20:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:05.282 20:09:47 -- target/tls.sh@203 -- # echo '{ 00:18:05.282 "subsystems": [ 00:18:05.282 { 00:18:05.282 "subsystem": "keyring", 00:18:05.282 "config": [] 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "subsystem": "iobuf", 00:18:05.282 "config": [ 00:18:05.282 { 00:18:05.282 "method": "iobuf_set_options", 00:18:05.282 "params": { 00:18:05.282 "small_pool_count": 8192, 00:18:05.282 "large_pool_count": 1024, 00:18:05.282 "small_bufsize": 8192, 00:18:05.282 "large_bufsize": 135168 00:18:05.282 } 00:18:05.282 } 00:18:05.282 ] 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "subsystem": "sock", 00:18:05.282 "config": [ 00:18:05.282 { 00:18:05.282 "method": "sock_impl_set_options", 00:18:05.282 "params": { 00:18:05.282 "impl_name": "uring", 00:18:05.282 "recv_buf_size": 2097152, 00:18:05.282 "send_buf_size": 2097152, 00:18:05.282 "enable_recv_pipe": true, 00:18:05.282 "enable_quickack": false, 00:18:05.282 "enable_placement_id": 0, 00:18:05.282 "enable_zerocopy_send_server": false, 00:18:05.282 "enable_zerocopy_send_client": false, 00:18:05.282 "zerocopy_threshold": 0, 00:18:05.282 "tls_version": 0, 00:18:05.282 "enable_ktls": false 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "sock_impl_set_options", 00:18:05.282 "params": { 00:18:05.282 "impl_name": "posix", 00:18:05.282 "recv_buf_size": 2097152, 
00:18:05.282 "send_buf_size": 2097152, 00:18:05.282 "enable_recv_pipe": true, 00:18:05.282 "enable_quickack": false, 00:18:05.282 "enable_placement_id": 0, 00:18:05.282 "enable_zerocopy_send_server": true, 00:18:05.282 "enable_zerocopy_send_client": false, 00:18:05.282 "zerocopy_threshold": 0, 00:18:05.282 "tls_version": 0, 00:18:05.282 "enable_ktls": false 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "sock_impl_set_options", 00:18:05.282 "params": { 00:18:05.282 "impl_name": "ssl", 00:18:05.282 "recv_buf_size": 4096, 00:18:05.282 "send_buf_size": 4096, 00:18:05.282 "enable_recv_pipe": true, 00:18:05.282 "enable_quickack": false, 00:18:05.282 "enable_placement_id": 0, 00:18:05.282 "enable_zerocopy_send_server": true, 00:18:05.282 "enable_zerocopy_send_client": false, 00:18:05.282 "zerocopy_threshold": 0, 00:18:05.282 "tls_version": 0, 00:18:05.282 "enable_ktls": false 00:18:05.282 } 00:18:05.282 } 00:18:05.282 ] 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "subsystem": "vmd", 00:18:05.282 "config": [] 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "subsystem": "accel", 00:18:05.282 "config": [ 00:18:05.282 { 00:18:05.282 "method": "accel_set_options", 00:18:05.282 "params": { 00:18:05.282 "small_cache_size": 128, 00:18:05.282 "large_cache_size": 16, 00:18:05.282 "task_count": 2048, 00:18:05.282 "sequence_count": 2048, 00:18:05.282 "buf_count": 2048 00:18:05.282 } 00:18:05.282 } 00:18:05.282 ] 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "subsystem": "bdev", 00:18:05.282 "config": [ 00:18:05.282 { 00:18:05.282 "method": "bdev_set_options", 00:18:05.282 "params": { 00:18:05.282 "bdev_io_pool_size": 65535, 00:18:05.282 "bdev_io_cache_size": 256, 00:18:05.282 "bdev_auto_examine": true, 00:18:05.282 "iobuf_small_cache_size": 128, 00:18:05.282 "iobuf_large_cache_size": 16 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "bdev_raid_set_options", 00:18:05.282 "params": { 00:18:05.282 "process_window_size_kb": 1024 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "bdev_iscsi_set_options", 00:18:05.282 "params": { 00:18:05.282 "timeout_sec": 30 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "bdev_nvme_set_options", 00:18:05.282 "params": { 00:18:05.282 "action_on_timeout": "none", 00:18:05.282 "timeout_us": 0, 00:18:05.282 "timeout_admin_us": 0, 00:18:05.282 "keep_alive_timeout_ms": 10000, 00:18:05.282 "arbitration_burst": 0, 00:18:05.282 "low_priority_weight": 0, 00:18:05.282 "medium_priority_weight": 0, 00:18:05.282 "high_priority_weight": 0, 00:18:05.282 "nvme_adminq_poll_period_us": 10000, 00:18:05.282 "nvme_ioq_poll_period_us": 0, 00:18:05.282 "io_queue_requests": 0, 00:18:05.282 "delay_cmd_submit": true, 00:18:05.282 "transport_retry_count": 4, 00:18:05.282 "bdev_retry_count": 3, 00:18:05.282 "transport_ack_timeout": 0, 00:18:05.282 "ctrlr_loss_timeout_sec": 0, 00:18:05.282 "reconnect_delay_sec": 0, 00:18:05.282 "fast_io_fail_timeout_sec": 0, 00:18:05.282 "disable_auto_failback": false, 00:18:05.282 "generate_uuids": false, 00:18:05.282 "transport_tos": 0, 00:18:05.282 "nvme_error_stat": false, 00:18:05.282 "rdma_srq_size": 0, 00:18:05.282 "io_path_stat": false, 00:18:05.282 "allow_accel_sequence": false, 00:18:05.282 "rdma_max_cq_size": 0, 00:18:05.282 "rdma_cm_event_timeout_ms": 0, 00:18:05.282 "dhchap_digests": [ 00:18:05.282 "sha256", 00:18:05.282 "sha384", 00:18:05.282 "sha512" 00:18:05.282 ], 00:18:05.282 "dhchap_dhgroups": [ 00:18:05.282 "null", 00:18:05.282 "ffdhe2048", 00:18:05.282 "ffdhe3072", 
00:18:05.282 "ffdhe4096", 00:18:05.282 "ffdhe6144", 00:18:05.282 "ffdhe8192" 00:18:05.282 ] 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "bdev_nvme_set_hotplug", 00:18:05.282 "params": { 00:18:05.282 "period_us": 100000, 00:18:05.282 "enable": false 00:18:05.282 } 00:18:05.282 }, 00:18:05.282 { 00:18:05.282 "method": "bdev_malloc_create", 00:18:05.282 "params": { 00:18:05.282 "name": "malloc0", 00:18:05.282 "num_blocks": 8192, 00:18:05.283 "block_size": 4096, 00:18:05.283 "physical_block_size": 4096, 00:18:05.283 "uuid": "c1b7ed2a-c582-4c69-bcc0-925a80ea8847", 00:18:05.283 "optimal_io_boundary": 0 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "bdev_wait_for_examine" 00:18:05.283 } 00:18:05.283 ] 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "subsystem": "nbd", 00:18:05.283 "config": [] 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "subsystem": "scheduler", 00:18:05.283 "config": [ 00:18:05.283 { 00:18:05.283 "method": "framework_set_scheduler", 00:18:05.283 "params": { 00:18:05.283 "name": "static" 00:18:05.283 } 00:18:05.283 } 00:18:05.283 ] 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "subsystem": "nvmf", 00:18:05.283 "config": [ 00:18:05.283 { 00:18:05.283 "method": "nvmf_set_config", 00:18:05.283 "params": { 00:18:05.283 "discovery_filter": "match_any", 00:18:05.283 "admin_cmd_passthru": { 00:18:05.283 "identify_ctrlr": false 00:18:05.283 } 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_set_max_subsystems", 00:18:05.283 "params": { 00:18:05.283 "max_subsystems": 1024 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_set_crdt", 00:18:05.283 "params": { 00:18:05.283 "crdt1": 0, 00:18:05.283 "crdt2": 0, 00:18:05.283 "crdt3": 0 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_create_transport", 00:18:05.283 "params": { 00:18:05.283 "trtype": "TCP", 00:18:05.283 "max_queue_depth": 128, 00:18:05.283 "max_io_qpairs_per_ctrlr": 127, 00:18:05.283 "in_capsule_data_size": 4096, 00:18:05.283 "max_io_size": 131072, 00:18:05.283 "io_unit_size": 131072, 00:18:05.283 "max_aq_depth": 128, 00:18:05.283 "num_shared_buffers": 511, 00:18:05.283 "buf_cache_size": 4294967295, 00:18:05.283 "dif_insert_or_strip": false, 00:18:05.283 "zcopy": false, 00:18:05.283 "c2h_success": false, 00:18:05.283 "sock_priority": 0, 00:18:05.283 "abort_timeout_sec": 1, 00:18:05.283 "ack_timeout": 0 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_create_subsystem", 00:18:05.283 "params": { 00:18:05.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.283 "allow_any_host": false, 00:18:05.283 "serial_number": "SPDK00000000000001", 00:18:05.283 "model_number": "SPDK bdev Controller", 00:18:05.283 "max_namespaces": 10, 00:18:05.283 "min_cntlid": 1, 00:18:05.283 "max_cntlid": 65519, 00:18:05.283 "ana_reporting": false 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_subsystem_add_host", 00:18:05.283 "params": { 00:18:05.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.283 "host": "nqn.2016-06.io.spdk:host1", 00:18:05.283 "psk": "/tmp/tmp.iZTqSWOR1x" 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_subsystem_add_ns", 00:18:05.283 "params": { 00:18:05.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.283 "namespace": { 00:18:05.283 "nsid": 1, 00:18:05.283 "bdev_name": "malloc0", 00:18:05.283 "nguid": "C1B7ED2AC5824C69BCC0925A80EA8847", 00:18:05.283 "uuid": "c1b7ed2a-c582-4c69-bcc0-925a80ea8847", 00:18:05.283 "no_auto_visible": false 
00:18:05.283 } 00:18:05.283 } 00:18:05.283 }, 00:18:05.283 { 00:18:05.283 "method": "nvmf_subsystem_add_listener", 00:18:05.283 "params": { 00:18:05.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.283 "listen_address": { 00:18:05.283 "trtype": "TCP", 00:18:05.283 "adrfam": "IPv4", 00:18:05.283 "traddr": "10.0.0.2", 00:18:05.283 "trsvcid": "4420" 00:18:05.283 }, 00:18:05.283 "secure_channel": true 00:18:05.283 } 00:18:05.283 } 00:18:05.283 ] 00:18:05.283 } 00:18:05.283 ] 00:18:05.283 }' 00:18:05.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.283 20:09:47 -- nvmf/common.sh@470 -- # nvmfpid=70590 00:18:05.283 20:09:47 -- nvmf/common.sh@471 -- # waitforlisten 70590 00:18:05.283 20:09:47 -- common/autotest_common.sh@817 -- # '[' -z 70590 ']' 00:18:05.283 20:09:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.283 20:09:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:05.283 20:09:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.283 20:09:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:05.283 20:09:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:05.283 20:09:47 -- common/autotest_common.sh@10 -- # set +x 00:18:05.283 [2024-04-24 20:09:47.381835] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:05.283 [2024-04-24 20:09:47.381918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.283 [2024-04-24 20:09:47.519501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.543 [2024-04-24 20:09:47.626834] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.543 [2024-04-24 20:09:47.626888] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.543 [2024-04-24 20:09:47.626896] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.543 [2024-04-24 20:09:47.626901] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.543 [2024-04-24 20:09:47.626907] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
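The save_config dumps above and this restart exercise the config round-trip: everything set up over RPC, including the TLS listener and the PSK host entry, is captured as JSON and replayed at startup, so no per-object RPCs are needed the second time around. The script feeds the JSON through /dev/fd process substitutions; a sketch of the same round-trip with ordinary files (tgt.json and perf.json are assumed names, and the netns and trace flags from the real invocation are dropped for brevity):

    # Capture the live target configuration (transport, subsystem, listener, PSK host, malloc bdev).
    scripts/rpc.py save_config > tgt.json
    # Start a fresh target straight from the saved configuration.
    build/bin/nvmf_tgt -m 0x2 -c tgt.json
    # bdevperf accepts a config the same way, replaying its bdev_nvme_attach_controller (PSK included).
    scripts/rpc.py -s /var/tmp/bdevperf.sock save_config > perf.json
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c perf.json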
00:18:05.543 [2024-04-24 20:09:47.626994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.803 [2024-04-24 20:09:47.839993] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.803 [2024-04-24 20:09:47.855953] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:05.803 [2024-04-24 20:09:47.871859] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:05.803 [2024-04-24 20:09:47.871982] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.803 [2024-04-24 20:09:47.872203] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.063 20:09:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.063 20:09:48 -- common/autotest_common.sh@850 -- # return 0 00:18:06.063 20:09:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:06.063 20:09:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:06.063 20:09:48 -- common/autotest_common.sh@10 -- # set +x 00:18:06.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.063 20:09:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.063 20:09:48 -- target/tls.sh@207 -- # bdevperf_pid=70622 00:18:06.063 20:09:48 -- target/tls.sh@208 -- # waitforlisten 70622 /var/tmp/bdevperf.sock 00:18:06.063 20:09:48 -- common/autotest_common.sh@817 -- # '[' -z 70622 ']' 00:18:06.063 20:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.063 20:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.063 20:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:06.063 20:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.063 20:09:48 -- common/autotest_common.sh@10 -- # set +x 00:18:06.063 20:09:48 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:06.063 20:09:48 -- target/tls.sh@204 -- # echo '{ 00:18:06.063 "subsystems": [ 00:18:06.063 { 00:18:06.063 "subsystem": "keyring", 00:18:06.063 "config": [] 00:18:06.063 }, 00:18:06.063 { 00:18:06.063 "subsystem": "iobuf", 00:18:06.063 "config": [ 00:18:06.063 { 00:18:06.063 "method": "iobuf_set_options", 00:18:06.063 "params": { 00:18:06.063 "small_pool_count": 8192, 00:18:06.063 "large_pool_count": 1024, 00:18:06.063 "small_bufsize": 8192, 00:18:06.063 "large_bufsize": 135168 00:18:06.063 } 00:18:06.063 } 00:18:06.063 ] 00:18:06.063 }, 00:18:06.063 { 00:18:06.063 "subsystem": "sock", 00:18:06.063 "config": [ 00:18:06.063 { 00:18:06.063 "method": "sock_impl_set_options", 00:18:06.063 "params": { 00:18:06.063 "impl_name": "uring", 00:18:06.063 "recv_buf_size": 2097152, 00:18:06.063 "send_buf_size": 2097152, 00:18:06.063 "enable_recv_pipe": true, 00:18:06.063 "enable_quickack": false, 00:18:06.063 "enable_placement_id": 0, 00:18:06.063 "enable_zerocopy_send_server": false, 00:18:06.063 "enable_zerocopy_send_client": false, 00:18:06.063 "zerocopy_threshold": 0, 00:18:06.063 "tls_version": 0, 00:18:06.063 "enable_ktls": false 00:18:06.063 } 00:18:06.063 }, 00:18:06.063 { 00:18:06.063 "method": "sock_impl_set_options", 00:18:06.063 "params": { 00:18:06.063 "impl_name": "posix", 00:18:06.063 "recv_buf_size": 2097152, 00:18:06.063 "send_buf_size": 2097152, 00:18:06.063 "enable_recv_pipe": true, 00:18:06.063 "enable_quickack": false, 00:18:06.063 "enable_placement_id": 0, 00:18:06.063 "enable_zerocopy_send_server": true, 00:18:06.063 "enable_zerocopy_send_client": false, 00:18:06.063 "zerocopy_threshold": 0, 00:18:06.063 "tls_version": 0, 00:18:06.063 "enable_ktls": false 00:18:06.063 } 00:18:06.063 }, 00:18:06.063 { 00:18:06.063 "method": "sock_impl_set_options", 00:18:06.063 "params": { 00:18:06.063 "impl_name": "ssl", 00:18:06.063 "recv_buf_size": 4096, 00:18:06.063 "send_buf_size": 4096, 00:18:06.063 "enable_recv_pipe": true, 00:18:06.063 "enable_quickack": false, 00:18:06.063 "enable_placement_id": 0, 00:18:06.063 "enable_zerocopy_send_server": true, 00:18:06.063 "enable_zerocopy_send_client": false, 00:18:06.063 "zerocopy_threshold": 0, 00:18:06.063 "tls_version": 0, 00:18:06.064 "enable_ktls": false 00:18:06.064 } 00:18:06.064 } 00:18:06.064 ] 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "subsystem": "vmd", 00:18:06.064 "config": [] 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "subsystem": "accel", 00:18:06.064 "config": [ 00:18:06.064 { 00:18:06.064 "method": "accel_set_options", 00:18:06.064 "params": { 00:18:06.064 "small_cache_size": 128, 00:18:06.064 "large_cache_size": 16, 00:18:06.064 "task_count": 2048, 00:18:06.064 "sequence_count": 2048, 00:18:06.064 "buf_count": 2048 00:18:06.064 } 00:18:06.064 } 00:18:06.064 ] 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "subsystem": "bdev", 00:18:06.064 "config": [ 00:18:06.064 { 00:18:06.064 "method": "bdev_set_options", 00:18:06.064 "params": { 00:18:06.064 "bdev_io_pool_size": 65535, 00:18:06.064 "bdev_io_cache_size": 256, 00:18:06.064 "bdev_auto_examine": true, 00:18:06.064 "iobuf_small_cache_size": 128, 00:18:06.064 "iobuf_large_cache_size": 16 00:18:06.064 } 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "method": 
"bdev_raid_set_options", 00:18:06.064 "params": { 00:18:06.064 "process_window_size_kb": 1024 00:18:06.064 } 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "method": "bdev_iscsi_set_options", 00:18:06.064 "params": { 00:18:06.064 "timeout_sec": 30 00:18:06.064 } 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "method": "bdev_nvme_set_options", 00:18:06.064 "params": { 00:18:06.064 "action_on_timeout": "none", 00:18:06.064 "timeout_us": 0, 00:18:06.064 "timeout_admin_us": 0, 00:18:06.064 "keep_alive_timeout_ms": 10000, 00:18:06.064 "arbitration_burst": 0, 00:18:06.064 "low_priority_weight": 0, 00:18:06.064 "medium_priority_weight": 0, 00:18:06.064 "high_priority_weight": 0, 00:18:06.064 "nvme_adminq_poll_period_us": 10000, 00:18:06.064 "nvme_ioq_poll_period_us": 0, 00:18:06.064 "io_queue_requests": 512, 00:18:06.064 "delay_cmd_submit": true, 00:18:06.064 "transport_retry_count": 4, 00:18:06.064 "bdev_retry_count": 3, 00:18:06.064 "transport_ack_timeout": 0, 00:18:06.064 "ctrlr_loss_timeout_sec": 0, 00:18:06.064 "reconnect_delay_sec": 0, 00:18:06.064 "fast_io_fail_timeout_sec": 0, 00:18:06.064 "disable_auto_failback": false, 00:18:06.064 "generate_uuids": false, 00:18:06.064 "transport_tos": 0, 00:18:06.064 "nvme_error_stat": false, 00:18:06.064 "rdma_srq_size": 0, 00:18:06.064 "io_path_stat": false, 00:18:06.064 "allow_accel_sequence": false, 00:18:06.064 "rdma_max_cq_size": 0, 00:18:06.064 "rdma_cm_event_timeout_ms": 0, 00:18:06.064 "dhchap_digests": [ 00:18:06.064 "sha256", 00:18:06.064 "sha384", 00:18:06.064 "sha512" 00:18:06.064 ], 00:18:06.064 "dhchap_dhgroups": [ 00:18:06.064 "null", 00:18:06.064 "ffdhe2048", 00:18:06.064 "ffdhe3072", 00:18:06.064 "ffdhe4096", 00:18:06.064 "ffdhe6144", 00:18:06.064 "ffdhe8192" 00:18:06.064 ] 00:18:06.064 } 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "method": "bdev_nvme_attach_controller", 00:18:06.064 "params": { 00:18:06.064 "name": "TLSTEST", 00:18:06.064 "trtype": "TCP", 00:18:06.064 "adrfam": "IPv4", 00:18:06.064 "traddr": "10.0.0.2", 00:18:06.064 "trsvcid": "4420", 00:18:06.064 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.064 "prchk_reftag": false, 00:18:06.064 "prchk_guard": false, 00:18:06.064 "ctrlr_loss_timeout_sec": 0, 00:18:06.064 "reconnect_delay_sec": 0, 00:18:06.064 "fast_io_fail_timeout_sec": 0, 00:18:06.064 "psk": "/tmp/tmp.iZTqSWOR1x", 00:18:06.064 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.064 "hdgst": false, 00:18:06.064 "ddgst": false 00:18:06.064 } 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "method": "bdev_nvme_set_hotplug", 00:18:06.064 "params": { 00:18:06.064 "period_us": 100000, 00:18:06.064 "enable": false 00:18:06.064 } 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "method": "bdev_wait_for_examine" 00:18:06.064 } 00:18:06.064 ] 00:18:06.064 }, 00:18:06.064 { 00:18:06.064 "subsystem": "nbd", 00:18:06.064 "config": [] 00:18:06.064 } 00:18:06.064 ] 00:18:06.064 }' 00:18:06.324 [2024-04-24 20:09:48.360635] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:18:06.324 [2024-04-24 20:09:48.360716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70622 ] 00:18:06.324 [2024-04-24 20:09:48.500177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.582 [2024-04-24 20:09:48.608774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.582 [2024-04-24 20:09:48.760168] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.582 [2024-04-24 20:09:48.760410] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:07.149 20:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.149 20:09:49 -- common/autotest_common.sh@850 -- # return 0 00:18:07.149 20:09:49 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:07.407 Running I/O for 10 seconds... 00:18:17.417 00:18:17.417 Latency(us) 00:18:17.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.417 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.417 Verification LBA range: start 0x0 length 0x2000 00:18:17.417 TLSTESTn1 : 10.01 5832.91 22.78 0.00 0.00 21907.36 4206.90 17972.32 00:18:17.417 =================================================================================================================== 00:18:17.417 Total : 5832.91 22.78 0.00 0.00 21907.36 4206.90 17972.32 00:18:17.417 0 00:18:17.417 20:09:59 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.417 20:09:59 -- target/tls.sh@214 -- # killprocess 70622 00:18:17.417 20:09:59 -- common/autotest_common.sh@936 -- # '[' -z 70622 ']' 00:18:17.417 20:09:59 -- common/autotest_common.sh@940 -- # kill -0 70622 00:18:17.417 20:09:59 -- common/autotest_common.sh@941 -- # uname 00:18:17.417 20:09:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.417 20:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70622 00:18:17.417 20:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:17.417 20:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:17.417 20:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70622' 00:18:17.417 killing process with pid 70622 00:18:17.417 20:09:59 -- common/autotest_common.sh@955 -- # kill 70622 00:18:17.417 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.417 00:18:17.417 Latency(us) 00:18:17.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.417 =================================================================================================================== 00:18:17.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.417 [2024-04-24 20:09:59.464563] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.417 20:09:59 -- common/autotest_common.sh@960 -- # wait 70622 00:18:17.676 20:09:59 -- target/tls.sh@215 -- # killprocess 70590 00:18:17.676 20:09:59 -- common/autotest_common.sh@936 -- # '[' -z 70590 ']' 00:18:17.676 20:09:59 -- common/autotest_common.sh@940 -- # kill -0 70590 00:18:17.676 20:09:59 
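The 10-second TLSTESTn1 run above is driven entirely over bdevperf's RPC socket: bdevperf starts idle (-z), reads its bdev and controller configuration, including the raw PSK file path inside bdev_nvme_attach_controller, from -c /dev/fd/63 (which is why the spdk_nvme_ctrlr_opts.psk deprecation warning fires), and bdevperf.py then kicks off the workload with perform_tests. A minimal sketch of the same sequence, assuming the JSON shown above were saved to an illustrative file named bdevperf.json instead of being piped through /dev/fd/63:

    # bdevperf.json stands in for the /dev/fd/63 config used by the test (illustrative name, not created by the script)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
    # drive the run over the RPC socket, with a 20 s timeout as in the trace
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests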
-- common/autotest_common.sh@941 -- # uname 00:18:17.676 20:09:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.676 20:09:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70590 00:18:17.676 20:09:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:17.676 20:09:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:17.676 20:09:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70590' 00:18:17.676 killing process with pid 70590 00:18:17.676 20:09:59 -- common/autotest_common.sh@955 -- # kill 70590 00:18:17.676 [2024-04-24 20:09:59.719477] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:17.676 [2024-04-24 20:09:59.719598] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:17.676 20:09:59 -- common/autotest_common.sh@960 -- # wait 70590 00:18:17.935 20:09:59 -- target/tls.sh@218 -- # nvmfappstart 00:18:17.935 20:09:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:17.935 20:09:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:17.935 20:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.935 20:09:59 -- nvmf/common.sh@470 -- # nvmfpid=70755 00:18:17.935 20:09:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:17.935 20:09:59 -- nvmf/common.sh@471 -- # waitforlisten 70755 00:18:17.935 20:09:59 -- common/autotest_common.sh@817 -- # '[' -z 70755 ']' 00:18:17.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.935 20:09:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.935 20:09:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:17.935 20:09:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.935 20:09:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:17.935 20:09:59 -- common/autotest_common.sh@10 -- # set +x 00:18:17.935 [2024-04-24 20:10:00.007599] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:17.935 [2024-04-24 20:10:00.007673] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.935 [2024-04-24 20:10:00.131206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.197 [2024-04-24 20:10:00.236895] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.197 [2024-04-24 20:10:00.236962] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.197 [2024-04-24 20:10:00.236968] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.197 [2024-04-24 20:10:00.236973] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.197 [2024-04-24 20:10:00.236977] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.197 [2024-04-24 20:10:00.236998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.765 20:10:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:18.765 20:10:00 -- common/autotest_common.sh@850 -- # return 0 00:18:18.765 20:10:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:18.765 20:10:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:18.765 20:10:00 -- common/autotest_common.sh@10 -- # set +x 00:18:18.765 20:10:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.765 20:10:00 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.iZTqSWOR1x 00:18:18.765 20:10:00 -- target/tls.sh@49 -- # local key=/tmp/tmp.iZTqSWOR1x 00:18:18.765 20:10:00 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.024 [2024-04-24 20:10:01.049599] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.025 20:10:01 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.025 20:10:01 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:19.285 [2024-04-24 20:10:01.432885] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:19.285 [2024-04-24 20:10:01.432981] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.285 [2024-04-24 20:10:01.433135] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.285 20:10:01 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:19.545 malloc0 00:18:19.545 20:10:01 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.805 20:10:01 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x 00:18:19.805 [2024-04-24 20:10:02.020537] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:19.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.805 20:10:02 -- target/tls.sh@222 -- # bdevperf_pid=70804 00:18:19.805 20:10:02 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.805 20:10:02 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:19.805 20:10:02 -- target/tls.sh@225 -- # waitforlisten 70804 /var/tmp/bdevperf.sock 00:18:19.805 20:10:02 -- common/autotest_common.sh@817 -- # '[' -z 70804 ']' 00:18:19.805 20:10:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.805 20:10:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.805 20:10:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
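Before the bdevperf client at pid 70804 comes up, the setup_nvmf_tgt helper above has already configured the target end of the TLS association through plain rpc.py calls, all visible in the trace. Condensed into a sketch with the same NQNs, address and PSK file as this run (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path; the -k flag on the listener is what later appears as "secure_channel": true in the saved config, and the --psk path on add_host is what triggers the nvmf_tcp_psk_path deprecation warning):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iZTqSWOR1x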
00:18:19.805 20:10:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.805 20:10:02 -- common/autotest_common.sh@10 -- # set +x 00:18:20.063 [2024-04-24 20:10:02.089006] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:20.063 [2024-04-24 20:10:02.089138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70804 ] 00:18:20.063 [2024-04-24 20:10:02.224092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.322 [2024-04-24 20:10:02.325309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.892 20:10:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.892 20:10:02 -- common/autotest_common.sh@850 -- # return 0 00:18:20.892 20:10:02 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iZTqSWOR1x 00:18:21.151 20:10:03 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:21.151 [2024-04-24 20:10:03.320643] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.151 nvme0n1 00:18:21.418 20:10:03 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:21.418 Running I/O for 1 seconds... 00:18:22.358 00:18:22.358 Latency(us) 00:18:22.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.358 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:22.358 Verification LBA range: start 0x0 length 0x2000 00:18:22.358 nvme0n1 : 1.01 5849.99 22.85 0.00 0.00 21725.87 4063.80 27130.19 00:18:22.358 =================================================================================================================== 00:18:22.358 Total : 5849.99 22.85 0.00 0.00 21725.87 4063.80 27130.19 00:18:22.358 0 00:18:22.358 20:10:04 -- target/tls.sh@234 -- # killprocess 70804 00:18:22.358 20:10:04 -- common/autotest_common.sh@936 -- # '[' -z 70804 ']' 00:18:22.358 20:10:04 -- common/autotest_common.sh@940 -- # kill -0 70804 00:18:22.358 20:10:04 -- common/autotest_common.sh@941 -- # uname 00:18:22.358 20:10:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.358 20:10:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70804 00:18:22.358 killing process with pid 70804 00:18:22.358 Received shutdown signal, test time was about 1.000000 seconds 00:18:22.358 00:18:22.358 Latency(us) 00:18:22.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.358 =================================================================================================================== 00:18:22.358 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.358 20:10:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:22.358 20:10:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:22.358 20:10:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70804' 00:18:22.358 20:10:04 -- common/autotest_common.sh@955 -- # kill 70804 00:18:22.358 20:10:04 -- common/autotest_common.sh@960 -- # wait 70804 00:18:22.617 
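This second run reaches the same listener but provisions the PSK on the initiator side through the keyring module rather than a raw path: the key file is registered once with keyring_file_add_key and then referenced by name in bdev_nvme_attach_controller. Note that only the experimental-TLS notice appears here, not the spdk_nvme_ctrlr_opts.psk deprecation warning from the first run. The two calls, as issued against the bdevperf RPC socket in the trace (rpc.py again abbreviates scripts/rpc.py):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iZTqSWOR1x
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1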
20:10:04 -- target/tls.sh@235 -- # killprocess 70755 00:18:22.617 20:10:04 -- common/autotest_common.sh@936 -- # '[' -z 70755 ']' 00:18:22.617 20:10:04 -- common/autotest_common.sh@940 -- # kill -0 70755 00:18:22.618 20:10:04 -- common/autotest_common.sh@941 -- # uname 00:18:22.618 20:10:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.618 20:10:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70755 00:18:22.618 killing process with pid 70755 00:18:22.618 20:10:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:22.618 20:10:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:22.618 20:10:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70755' 00:18:22.618 20:10:04 -- common/autotest_common.sh@955 -- # kill 70755 00:18:22.618 [2024-04-24 20:10:04.836863] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:22.618 [2024-04-24 20:10:04.836902] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:22.618 20:10:04 -- common/autotest_common.sh@960 -- # wait 70755 00:18:22.877 20:10:05 -- target/tls.sh@238 -- # nvmfappstart 00:18:22.877 20:10:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:22.877 20:10:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:22.877 20:10:05 -- common/autotest_common.sh@10 -- # set +x 00:18:22.877 20:10:05 -- nvmf/common.sh@470 -- # nvmfpid=70855 00:18:22.877 20:10:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:22.877 20:10:05 -- nvmf/common.sh@471 -- # waitforlisten 70855 00:18:22.877 20:10:05 -- common/autotest_common.sh@817 -- # '[' -z 70855 ']' 00:18:22.877 20:10:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.877 20:10:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.877 20:10:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.877 20:10:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.877 20:10:05 -- common/autotest_common.sh@10 -- # set +x 00:18:22.877 [2024-04-24 20:10:05.122827] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:22.877 [2024-04-24 20:10:05.122969] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.136 [2024-04-24 20:10:05.261442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.136 [2024-04-24 20:10:05.353702] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.136 [2024-04-24 20:10:05.353833] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.136 [2024-04-24 20:10:05.353865] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.136 [2024-04-24 20:10:05.353907] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:23.136 [2024-04-24 20:10:05.353922] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.136 [2024-04-24 20:10:05.353963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.706 20:10:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.706 20:10:05 -- common/autotest_common.sh@850 -- # return 0 00:18:23.706 20:10:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:23.706 20:10:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:23.706 20:10:05 -- common/autotest_common.sh@10 -- # set +x 00:18:23.965 20:10:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.965 20:10:06 -- target/tls.sh@239 -- # rpc_cmd 00:18:23.965 20:10:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.965 20:10:06 -- common/autotest_common.sh@10 -- # set +x 00:18:23.965 [2024-04-24 20:10:06.011679] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.965 malloc0 00:18:23.965 [2024-04-24 20:10:06.040156] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:23.965 [2024-04-24 20:10:06.040236] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.965 [2024-04-24 20:10:06.040416] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.965 20:10:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.965 20:10:06 -- target/tls.sh@252 -- # bdevperf_pid=70887 00:18:23.965 20:10:06 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:23.965 20:10:06 -- target/tls.sh@254 -- # waitforlisten 70887 /var/tmp/bdevperf.sock 00:18:23.965 20:10:06 -- common/autotest_common.sh@817 -- # '[' -z 70887 ']' 00:18:23.965 20:10:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.965 20:10:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.965 20:10:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.965 20:10:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.965 20:10:06 -- common/autotest_common.sh@10 -- # set +x 00:18:23.965 [2024-04-24 20:10:06.121373] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:18:23.965 [2024-04-24 20:10:06.121544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70887 ] 00:18:24.225 [2024-04-24 20:10:06.257997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.225 [2024-04-24 20:10:06.352947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.793 20:10:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.793 20:10:06 -- common/autotest_common.sh@850 -- # return 0 00:18:24.793 20:10:06 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iZTqSWOR1x 00:18:25.052 20:10:07 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:25.311 [2024-04-24 20:10:07.366060] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.311 nvme0n1 00:18:25.311 20:10:07 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:25.569 Running I/O for 1 seconds... 00:18:26.506 00:18:26.507 Latency(us) 00:18:26.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.507 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:26.507 Verification LBA range: start 0x0 length 0x2000 00:18:26.507 nvme0n1 : 1.01 5920.29 23.13 0.00 0.00 21458.43 4664.79 16255.22 00:18:26.507 =================================================================================================================== 00:18:26.507 Total : 5920.29 23.13 0.00 0.00 21458.43 4664.79 16255.22 00:18:26.507 0 00:18:26.507 20:10:08 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:26.507 20:10:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.507 20:10:08 -- common/autotest_common.sh@10 -- # set +x 00:18:26.507 20:10:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.507 20:10:08 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:26.507 "subsystems": [ 00:18:26.507 { 00:18:26.507 "subsystem": "keyring", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "keyring_file_add_key", 00:18:26.507 "params": { 00:18:26.507 "name": "key0", 00:18:26.507 "path": "/tmp/tmp.iZTqSWOR1x" 00:18:26.507 } 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "iobuf", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "iobuf_set_options", 00:18:26.507 "params": { 00:18:26.507 "small_pool_count": 8192, 00:18:26.507 "large_pool_count": 1024, 00:18:26.507 "small_bufsize": 8192, 00:18:26.507 "large_bufsize": 135168 00:18:26.507 } 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "sock", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "sock_impl_set_options", 00:18:26.507 "params": { 00:18:26.507 "impl_name": "uring", 00:18:26.507 "recv_buf_size": 2097152, 00:18:26.507 "send_buf_size": 2097152, 00:18:26.507 "enable_recv_pipe": true, 00:18:26.507 "enable_quickack": false, 00:18:26.507 "enable_placement_id": 0, 00:18:26.507 "enable_zerocopy_send_server": false, 00:18:26.507 "enable_zerocopy_send_client": false, 00:18:26.507 "zerocopy_threshold": 0, 
00:18:26.507 "tls_version": 0, 00:18:26.507 "enable_ktls": false 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "sock_impl_set_options", 00:18:26.507 "params": { 00:18:26.507 "impl_name": "posix", 00:18:26.507 "recv_buf_size": 2097152, 00:18:26.507 "send_buf_size": 2097152, 00:18:26.507 "enable_recv_pipe": true, 00:18:26.507 "enable_quickack": false, 00:18:26.507 "enable_placement_id": 0, 00:18:26.507 "enable_zerocopy_send_server": true, 00:18:26.507 "enable_zerocopy_send_client": false, 00:18:26.507 "zerocopy_threshold": 0, 00:18:26.507 "tls_version": 0, 00:18:26.507 "enable_ktls": false 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "sock_impl_set_options", 00:18:26.507 "params": { 00:18:26.507 "impl_name": "ssl", 00:18:26.507 "recv_buf_size": 4096, 00:18:26.507 "send_buf_size": 4096, 00:18:26.507 "enable_recv_pipe": true, 00:18:26.507 "enable_quickack": false, 00:18:26.507 "enable_placement_id": 0, 00:18:26.507 "enable_zerocopy_send_server": true, 00:18:26.507 "enable_zerocopy_send_client": false, 00:18:26.507 "zerocopy_threshold": 0, 00:18:26.507 "tls_version": 0, 00:18:26.507 "enable_ktls": false 00:18:26.507 } 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "vmd", 00:18:26.507 "config": [] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "accel", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "accel_set_options", 00:18:26.507 "params": { 00:18:26.507 "small_cache_size": 128, 00:18:26.507 "large_cache_size": 16, 00:18:26.507 "task_count": 2048, 00:18:26.507 "sequence_count": 2048, 00:18:26.507 "buf_count": 2048 00:18:26.507 } 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "bdev", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "bdev_set_options", 00:18:26.507 "params": { 00:18:26.507 "bdev_io_pool_size": 65535, 00:18:26.507 "bdev_io_cache_size": 256, 00:18:26.507 "bdev_auto_examine": true, 00:18:26.507 "iobuf_small_cache_size": 128, 00:18:26.507 "iobuf_large_cache_size": 16 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "bdev_raid_set_options", 00:18:26.507 "params": { 00:18:26.507 "process_window_size_kb": 1024 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "bdev_iscsi_set_options", 00:18:26.507 "params": { 00:18:26.507 "timeout_sec": 30 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "bdev_nvme_set_options", 00:18:26.507 "params": { 00:18:26.507 "action_on_timeout": "none", 00:18:26.507 "timeout_us": 0, 00:18:26.507 "timeout_admin_us": 0, 00:18:26.507 "keep_alive_timeout_ms": 10000, 00:18:26.507 "arbitration_burst": 0, 00:18:26.507 "low_priority_weight": 0, 00:18:26.507 "medium_priority_weight": 0, 00:18:26.507 "high_priority_weight": 0, 00:18:26.507 "nvme_adminq_poll_period_us": 10000, 00:18:26.507 "nvme_ioq_poll_period_us": 0, 00:18:26.507 "io_queue_requests": 0, 00:18:26.507 "delay_cmd_submit": true, 00:18:26.507 "transport_retry_count": 4, 00:18:26.507 "bdev_retry_count": 3, 00:18:26.507 "transport_ack_timeout": 0, 00:18:26.507 "ctrlr_loss_timeout_sec": 0, 00:18:26.507 "reconnect_delay_sec": 0, 00:18:26.507 "fast_io_fail_timeout_sec": 0, 00:18:26.507 "disable_auto_failback": false, 00:18:26.507 "generate_uuids": false, 00:18:26.507 "transport_tos": 0, 00:18:26.507 "nvme_error_stat": false, 00:18:26.507 "rdma_srq_size": 0, 00:18:26.507 "io_path_stat": false, 00:18:26.507 "allow_accel_sequence": false, 00:18:26.507 "rdma_max_cq_size": 0, 00:18:26.507 
"rdma_cm_event_timeout_ms": 0, 00:18:26.507 "dhchap_digests": [ 00:18:26.507 "sha256", 00:18:26.507 "sha384", 00:18:26.507 "sha512" 00:18:26.507 ], 00:18:26.507 "dhchap_dhgroups": [ 00:18:26.507 "null", 00:18:26.507 "ffdhe2048", 00:18:26.507 "ffdhe3072", 00:18:26.507 "ffdhe4096", 00:18:26.507 "ffdhe6144", 00:18:26.507 "ffdhe8192" 00:18:26.507 ] 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "bdev_nvme_set_hotplug", 00:18:26.507 "params": { 00:18:26.507 "period_us": 100000, 00:18:26.507 "enable": false 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "bdev_malloc_create", 00:18:26.507 "params": { 00:18:26.507 "name": "malloc0", 00:18:26.507 "num_blocks": 8192, 00:18:26.507 "block_size": 4096, 00:18:26.507 "physical_block_size": 4096, 00:18:26.507 "uuid": "4fa11c87-f4cf-4bb1-bd3c-72f6edc6c23c", 00:18:26.507 "optimal_io_boundary": 0 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "bdev_wait_for_examine" 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "nbd", 00:18:26.507 "config": [] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "scheduler", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "framework_set_scheduler", 00:18:26.507 "params": { 00:18:26.507 "name": "static" 00:18:26.507 } 00:18:26.507 } 00:18:26.507 ] 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "subsystem": "nvmf", 00:18:26.507 "config": [ 00:18:26.507 { 00:18:26.507 "method": "nvmf_set_config", 00:18:26.507 "params": { 00:18:26.507 "discovery_filter": "match_any", 00:18:26.507 "admin_cmd_passthru": { 00:18:26.507 "identify_ctrlr": false 00:18:26.507 } 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "nvmf_set_max_subsystems", 00:18:26.507 "params": { 00:18:26.507 "max_subsystems": 1024 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "nvmf_set_crdt", 00:18:26.507 "params": { 00:18:26.507 "crdt1": 0, 00:18:26.507 "crdt2": 0, 00:18:26.507 "crdt3": 0 00:18:26.507 } 00:18:26.507 }, 00:18:26.507 { 00:18:26.507 "method": "nvmf_create_transport", 00:18:26.507 "params": { 00:18:26.508 "trtype": "TCP", 00:18:26.508 "max_queue_depth": 128, 00:18:26.508 "max_io_qpairs_per_ctrlr": 127, 00:18:26.508 "in_capsule_data_size": 4096, 00:18:26.508 "max_io_size": 131072, 00:18:26.508 "io_unit_size": 131072, 00:18:26.508 "max_aq_depth": 128, 00:18:26.508 "num_shared_buffers": 511, 00:18:26.508 "buf_cache_size": 4294967295, 00:18:26.508 "dif_insert_or_strip": false, 00:18:26.508 "zcopy": false, 00:18:26.508 "c2h_success": false, 00:18:26.508 "sock_priority": 0, 00:18:26.508 "abort_timeout_sec": 1, 00:18:26.508 "ack_timeout": 0 00:18:26.508 } 00:18:26.508 }, 00:18:26.508 { 00:18:26.508 "method": "nvmf_create_subsystem", 00:18:26.508 "params": { 00:18:26.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.508 "allow_any_host": false, 00:18:26.508 "serial_number": "00000000000000000000", 00:18:26.508 "model_number": "SPDK bdev Controller", 00:18:26.508 "max_namespaces": 32, 00:18:26.508 "min_cntlid": 1, 00:18:26.508 "max_cntlid": 65519, 00:18:26.508 "ana_reporting": false 00:18:26.508 } 00:18:26.508 }, 00:18:26.508 { 00:18:26.508 "method": "nvmf_subsystem_add_host", 00:18:26.508 "params": { 00:18:26.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.508 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.508 "psk": "key0" 00:18:26.508 } 00:18:26.508 }, 00:18:26.508 { 00:18:26.508 "method": "nvmf_subsystem_add_ns", 00:18:26.508 "params": { 00:18:26.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:26.508 "namespace": { 00:18:26.508 "nsid": 1, 00:18:26.508 "bdev_name": "malloc0", 00:18:26.508 "nguid": "4FA11C87F4CF4BB1BD3C72F6EDC6C23C", 00:18:26.508 "uuid": "4fa11c87-f4cf-4bb1-bd3c-72f6edc6c23c", 00:18:26.508 "no_auto_visible": false 00:18:26.508 } 00:18:26.508 } 00:18:26.508 }, 00:18:26.508 { 00:18:26.508 "method": "nvmf_subsystem_add_listener", 00:18:26.508 "params": { 00:18:26.508 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.508 "listen_address": { 00:18:26.508 "trtype": "TCP", 00:18:26.508 "adrfam": "IPv4", 00:18:26.508 "traddr": "10.0.0.2", 00:18:26.508 "trsvcid": "4420" 00:18:26.508 }, 00:18:26.508 "secure_channel": true 00:18:26.508 } 00:18:26.508 } 00:18:26.508 ] 00:18:26.508 } 00:18:26.508 ] 00:18:26.508 }' 00:18:26.508 20:10:08 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:26.767 20:10:08 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:26.767 "subsystems": [ 00:18:26.767 { 00:18:26.767 "subsystem": "keyring", 00:18:26.767 "config": [ 00:18:26.767 { 00:18:26.767 "method": "keyring_file_add_key", 00:18:26.767 "params": { 00:18:26.767 "name": "key0", 00:18:26.767 "path": "/tmp/tmp.iZTqSWOR1x" 00:18:26.767 } 00:18:26.767 } 00:18:26.767 ] 00:18:26.767 }, 00:18:26.767 { 00:18:26.767 "subsystem": "iobuf", 00:18:26.767 "config": [ 00:18:26.767 { 00:18:26.767 "method": "iobuf_set_options", 00:18:26.767 "params": { 00:18:26.767 "small_pool_count": 8192, 00:18:26.767 "large_pool_count": 1024, 00:18:26.767 "small_bufsize": 8192, 00:18:26.767 "large_bufsize": 135168 00:18:26.767 } 00:18:26.767 } 00:18:26.767 ] 00:18:26.767 }, 00:18:26.767 { 00:18:26.767 "subsystem": "sock", 00:18:26.767 "config": [ 00:18:26.767 { 00:18:26.767 "method": "sock_impl_set_options", 00:18:26.767 "params": { 00:18:26.767 "impl_name": "uring", 00:18:26.767 "recv_buf_size": 2097152, 00:18:26.767 "send_buf_size": 2097152, 00:18:26.767 "enable_recv_pipe": true, 00:18:26.767 "enable_quickack": false, 00:18:26.767 "enable_placement_id": 0, 00:18:26.767 "enable_zerocopy_send_server": false, 00:18:26.767 "enable_zerocopy_send_client": false, 00:18:26.767 "zerocopy_threshold": 0, 00:18:26.767 "tls_version": 0, 00:18:26.767 "enable_ktls": false 00:18:26.767 } 00:18:26.767 }, 00:18:26.767 { 00:18:26.767 "method": "sock_impl_set_options", 00:18:26.767 "params": { 00:18:26.767 "impl_name": "posix", 00:18:26.767 "recv_buf_size": 2097152, 00:18:26.767 "send_buf_size": 2097152, 00:18:26.767 "enable_recv_pipe": true, 00:18:26.767 "enable_quickack": false, 00:18:26.767 "enable_placement_id": 0, 00:18:26.767 "enable_zerocopy_send_server": true, 00:18:26.767 "enable_zerocopy_send_client": false, 00:18:26.767 "zerocopy_threshold": 0, 00:18:26.767 "tls_version": 0, 00:18:26.767 "enable_ktls": false 00:18:26.767 } 00:18:26.767 }, 00:18:26.767 { 00:18:26.767 "method": "sock_impl_set_options", 00:18:26.767 "params": { 00:18:26.767 "impl_name": "ssl", 00:18:26.767 "recv_buf_size": 4096, 00:18:26.767 "send_buf_size": 4096, 00:18:26.767 "enable_recv_pipe": true, 00:18:26.767 "enable_quickack": false, 00:18:26.767 "enable_placement_id": 0, 00:18:26.767 "enable_zerocopy_send_server": true, 00:18:26.767 "enable_zerocopy_send_client": false, 00:18:26.767 "zerocopy_threshold": 0, 00:18:26.767 "tls_version": 0, 00:18:26.767 "enable_ktls": false 00:18:26.767 } 00:18:26.767 } 00:18:26.767 ] 00:18:26.767 }, 00:18:26.767 { 00:18:26.767 "subsystem": "vmd", 00:18:26.767 "config": [] 00:18:26.767 }, 00:18:26.768 { 00:18:26.768 "subsystem": "accel", 00:18:26.768 "config": [ 
00:18:26.768 { 00:18:26.768 "method": "accel_set_options", 00:18:26.768 "params": { 00:18:26.768 "small_cache_size": 128, 00:18:26.768 "large_cache_size": 16, 00:18:26.768 "task_count": 2048, 00:18:26.768 "sequence_count": 2048, 00:18:26.768 "buf_count": 2048 00:18:26.768 } 00:18:26.768 } 00:18:26.768 ] 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "subsystem": "bdev", 00:18:26.768 "config": [ 00:18:26.768 { 00:18:26.768 "method": "bdev_set_options", 00:18:26.768 "params": { 00:18:26.768 "bdev_io_pool_size": 65535, 00:18:26.768 "bdev_io_cache_size": 256, 00:18:26.768 "bdev_auto_examine": true, 00:18:26.768 "iobuf_small_cache_size": 128, 00:18:26.768 "iobuf_large_cache_size": 16 00:18:26.768 } 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_raid_set_options", 00:18:26.768 "params": { 00:18:26.768 "process_window_size_kb": 1024 00:18:26.768 } 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_iscsi_set_options", 00:18:26.768 "params": { 00:18:26.768 "timeout_sec": 30 00:18:26.768 } 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_nvme_set_options", 00:18:26.768 "params": { 00:18:26.768 "action_on_timeout": "none", 00:18:26.768 "timeout_us": 0, 00:18:26.768 "timeout_admin_us": 0, 00:18:26.768 "keep_alive_timeout_ms": 10000, 00:18:26.768 "arbitration_burst": 0, 00:18:26.768 "low_priority_weight": 0, 00:18:26.768 "medium_priority_weight": 0, 00:18:26.768 "high_priority_weight": 0, 00:18:26.768 "nvme_adminq_poll_period_us": 10000, 00:18:26.768 "nvme_ioq_poll_period_us": 0, 00:18:26.768 "io_queue_requests": 512, 00:18:26.768 "delay_cmd_submit": true, 00:18:26.768 "transport_retry_count": 4, 00:18:26.768 "bdev_retry_count": 3, 00:18:26.768 "transport_ack_timeout": 0, 00:18:26.768 "ctrlr_loss_timeout_sec": 0, 00:18:26.768 "reconnect_delay_sec": 0, 00:18:26.768 "fast_io_fail_timeout_sec": 0, 00:18:26.768 "disable_auto_failback": false, 00:18:26.768 "generate_uuids": false, 00:18:26.768 "transport_tos": 0, 00:18:26.768 "nvme_error_stat": false, 00:18:26.768 "rdma_srq_size": 0, 00:18:26.768 "io_path_stat": false, 00:18:26.768 "allow_accel_sequence": false, 00:18:26.768 "rdma_max_cq_size": 0, 00:18:26.768 "rdma_cm_event_timeout_ms": 0, 00:18:26.768 "dhchap_digests": [ 00:18:26.768 "sha256", 00:18:26.768 "sha384", 00:18:26.768 "sha512" 00:18:26.768 ], 00:18:26.768 "dhchap_dhgroups": [ 00:18:26.768 "null", 00:18:26.768 "ffdhe2048", 00:18:26.768 "ffdhe3072", 00:18:26.768 "ffdhe4096", 00:18:26.768 "ffdhe6144", 00:18:26.768 "ffdhe8192" 00:18:26.768 ] 00:18:26.768 } 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_nvme_attach_controller", 00:18:26.768 "params": { 00:18:26.768 "name": "nvme0", 00:18:26.768 "trtype": "TCP", 00:18:26.768 "adrfam": "IPv4", 00:18:26.768 "traddr": "10.0.0.2", 00:18:26.768 "trsvcid": "4420", 00:18:26.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.768 "prchk_reftag": false, 00:18:26.768 "prchk_guard": false, 00:18:26.768 "ctrlr_loss_timeout_sec": 0, 00:18:26.768 "reconnect_delay_sec": 0, 00:18:26.768 "fast_io_fail_timeout_sec": 0, 00:18:26.768 "psk": "key0", 00:18:26.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.768 "hdgst": false, 00:18:26.768 "ddgst": false 00:18:26.768 } 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_nvme_set_hotplug", 00:18:26.768 "params": { 00:18:26.768 "period_us": 100000, 00:18:26.768 "enable": false 00:18:26.768 } 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_enable_histogram", 00:18:26.768 "params": { 00:18:26.768 "name": "nvme0n1", 00:18:26.768 "enable": true 00:18:26.768 } 
00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "method": "bdev_wait_for_examine" 00:18:26.768 } 00:18:26.768 ] 00:18:26.768 }, 00:18:26.768 { 00:18:26.768 "subsystem": "nbd", 00:18:26.768 "config": [] 00:18:26.768 } 00:18:26.768 ] 00:18:26.768 }' 00:18:26.768 20:10:08 -- target/tls.sh@266 -- # killprocess 70887 00:18:26.768 20:10:08 -- common/autotest_common.sh@936 -- # '[' -z 70887 ']' 00:18:26.768 20:10:08 -- common/autotest_common.sh@940 -- # kill -0 70887 00:18:26.768 20:10:08 -- common/autotest_common.sh@941 -- # uname 00:18:26.768 20:10:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.768 20:10:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70887 00:18:27.027 20:10:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:27.027 20:10:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:27.027 killing process with pid 70887 00:18:27.027 20:10:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70887' 00:18:27.027 20:10:09 -- common/autotest_common.sh@955 -- # kill 70887 00:18:27.027 Received shutdown signal, test time was about 1.000000 seconds 00:18:27.027 00:18:27.027 Latency(us) 00:18:27.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.027 =================================================================================================================== 00:18:27.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.027 20:10:09 -- common/autotest_common.sh@960 -- # wait 70887 00:18:27.027 20:10:09 -- target/tls.sh@267 -- # killprocess 70855 00:18:27.027 20:10:09 -- common/autotest_common.sh@936 -- # '[' -z 70855 ']' 00:18:27.027 20:10:09 -- common/autotest_common.sh@940 -- # kill -0 70855 00:18:27.027 20:10:09 -- common/autotest_common.sh@941 -- # uname 00:18:27.027 20:10:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.027 20:10:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70855 00:18:27.287 killing process with pid 70855 00:18:27.287 20:10:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:27.287 20:10:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:27.287 20:10:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70855' 00:18:27.287 20:10:09 -- common/autotest_common.sh@955 -- # kill 70855 00:18:27.287 [2024-04-24 20:10:09.285687] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:27.287 20:10:09 -- common/autotest_common.sh@960 -- # wait 70855 00:18:27.287 20:10:09 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:27.287 20:10:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:27.287 20:10:09 -- target/tls.sh@269 -- # echo '{ 00:18:27.287 "subsystems": [ 00:18:27.287 { 00:18:27.287 "subsystem": "keyring", 00:18:27.287 "config": [ 00:18:27.287 { 00:18:27.287 "method": "keyring_file_add_key", 00:18:27.287 "params": { 00:18:27.287 "name": "key0", 00:18:27.287 "path": "/tmp/tmp.iZTqSWOR1x" 00:18:27.287 } 00:18:27.287 } 00:18:27.287 ] 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "subsystem": "iobuf", 00:18:27.287 "config": [ 00:18:27.287 { 00:18:27.287 "method": "iobuf_set_options", 00:18:27.287 "params": { 00:18:27.287 "small_pool_count": 8192, 00:18:27.287 "large_pool_count": 1024, 00:18:27.287 "small_bufsize": 8192, 00:18:27.287 "large_bufsize": 135168 00:18:27.287 } 00:18:27.287 } 
00:18:27.287 ] 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "subsystem": "sock", 00:18:27.287 "config": [ 00:18:27.287 { 00:18:27.287 "method": "sock_impl_set_options", 00:18:27.287 "params": { 00:18:27.287 "impl_name": "uring", 00:18:27.287 "recv_buf_size": 2097152, 00:18:27.287 "send_buf_size": 2097152, 00:18:27.287 "enable_recv_pipe": true, 00:18:27.287 "enable_quickack": false, 00:18:27.287 "enable_placement_id": 0, 00:18:27.287 "enable_zerocopy_send_server": false, 00:18:27.287 "enable_zerocopy_send_client": false, 00:18:27.287 "zerocopy_threshold": 0, 00:18:27.287 "tls_version": 0, 00:18:27.287 "enable_ktls": false 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "sock_impl_set_options", 00:18:27.287 "params": { 00:18:27.287 "impl_name": "posix", 00:18:27.287 "recv_buf_size": 2097152, 00:18:27.287 "send_buf_size": 2097152, 00:18:27.287 "enable_recv_pipe": true, 00:18:27.287 "enable_quickack": false, 00:18:27.287 "enable_placement_id": 0, 00:18:27.287 "enable_zerocopy_send_server": true, 00:18:27.287 "enable_zerocopy_send_client": false, 00:18:27.287 "zerocopy_threshold": 0, 00:18:27.287 "tls_version": 0, 00:18:27.287 "enable_ktls": false 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "sock_impl_set_options", 00:18:27.287 "params": { 00:18:27.287 "impl_name": "ssl", 00:18:27.287 "recv_buf_size": 4096, 00:18:27.287 "send_buf_size": 4096, 00:18:27.287 "enable_recv_pipe": true, 00:18:27.287 "enable_quickack": false, 00:18:27.287 "enable_placement_id": 0, 00:18:27.287 "enable_zerocopy_send_server": true, 00:18:27.287 "enable_zerocopy_send_client": false, 00:18:27.287 "zerocopy_threshold": 0, 00:18:27.287 "tls_version": 0, 00:18:27.287 "enable_ktls": false 00:18:27.287 } 00:18:27.287 } 00:18:27.287 ] 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "subsystem": "vmd", 00:18:27.287 "config": [] 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "subsystem": "accel", 00:18:27.287 "config": [ 00:18:27.287 { 00:18:27.287 "method": "accel_set_options", 00:18:27.287 "params": { 00:18:27.287 "small_cache_size": 128, 00:18:27.287 "large_cache_size": 16, 00:18:27.287 "task_count": 2048, 00:18:27.287 "sequence_count": 2048, 00:18:27.287 "buf_count": 2048 00:18:27.287 } 00:18:27.287 } 00:18:27.287 ] 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "subsystem": "bdev", 00:18:27.287 "config": [ 00:18:27.287 { 00:18:27.287 "method": "bdev_set_options", 00:18:27.287 "params": { 00:18:27.287 "bdev_io_pool_size": 65535, 00:18:27.287 "bdev_io_cache_size": 256, 00:18:27.287 "bdev_auto_examine": true, 00:18:27.287 "iobuf_small_cache_size": 128, 00:18:27.287 "iobuf_large_cache_size": 16 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "bdev_raid_set_options", 00:18:27.287 "params": { 00:18:27.287 "process_window_size_kb": 1024 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "bdev_iscsi_set_options", 00:18:27.287 "params": { 00:18:27.287 "timeout_sec": 30 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "bdev_nvme_set_options", 00:18:27.287 "params": { 00:18:27.287 "action_on_timeout": "none", 00:18:27.287 "timeout_us": 0, 00:18:27.287 "timeout_admin_us": 0, 00:18:27.287 "keep_alive_timeout_ms": 10000, 00:18:27.287 "arbitration_burst": 0, 00:18:27.287 "low_priority_weight": 0, 00:18:27.287 "medium_priority_weight": 0, 00:18:27.287 "high_priority_weight": 0, 00:18:27.287 "nvme_adminq_poll_period_us": 10000, 00:18:27.287 "nvme_ioq_poll_period_us": 0, 00:18:27.287 "io_queue_requests": 0, 00:18:27.287 "delay_cmd_submit": true, 
00:18:27.287 "transport_retry_count": 4, 00:18:27.287 "bdev_retry_count": 3, 00:18:27.287 "transport_ack_timeout": 0, 00:18:27.287 "ctrlr_loss_timeout_sec": 0, 00:18:27.287 "reconnect_delay_sec": 0, 00:18:27.287 "fast_io_fail_timeout_sec": 0, 00:18:27.287 "disable_auto_failback": false, 00:18:27.287 "generate_uuids": false, 00:18:27.287 "transport_tos": 0, 00:18:27.287 "nvme_error_stat": false, 00:18:27.287 "rdma_srq_size": 0, 00:18:27.287 "io_path_stat": false, 00:18:27.287 "allow_accel_sequence": false, 00:18:27.287 "rdma_max_cq_size": 0, 00:18:27.287 "rdma_cm_event_timeout_ms": 0, 00:18:27.287 "dhchap_digests": [ 00:18:27.287 "sha256", 00:18:27.287 "sha384", 00:18:27.287 "sha512" 00:18:27.287 ], 00:18:27.287 "dhchap_dhgroups": [ 00:18:27.287 "null", 00:18:27.287 "ffdhe2048", 00:18:27.287 "ffdhe3072", 00:18:27.287 "ffdhe4096", 00:18:27.287 "ffdhe6144", 00:18:27.287 "ffdhe8192" 00:18:27.287 ] 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "bdev_nvme_set_hotplug", 00:18:27.287 "params": { 00:18:27.287 "period_us": 100000, 00:18:27.287 "enable": false 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "bdev_malloc_create", 00:18:27.287 "params": { 00:18:27.287 "name": "malloc0", 00:18:27.287 "num_blocks": 8192, 00:18:27.287 "block_size": 4096, 00:18:27.287 "physical_block_size": 4096, 00:18:27.287 "uuid": "4fa11c87-f4cf-4bb1-bd3c-72f6edc6c23c", 00:18:27.287 "optimal_io_boundary": 0 00:18:27.287 } 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "method": "bdev_wait_for_examine" 00:18:27.287 } 00:18:27.287 ] 00:18:27.287 }, 00:18:27.287 { 00:18:27.287 "subsystem": "nbd", 00:18:27.287 "config": [] 00:18:27.287 }, 00:18:27.287 { 00:18:27.288 "subsystem": "scheduler", 00:18:27.288 "config": [ 00:18:27.288 { 00:18:27.288 "method": "framework_set_scheduler", 00:18:27.288 "params": { 00:18:27.288 "name": "static" 00:18:27.288 } 00:18:27.288 } 00:18:27.288 ] 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "subsystem": "nvmf", 00:18:27.288 "config": [ 00:18:27.288 { 00:18:27.288 "method": "nvmf_set_config", 00:18:27.288 "params": { 00:18:27.288 "discovery_filter": "match_any", 00:18:27.288 "admin_cmd_passthru": { 00:18:27.288 "identify_ctrlr": false 00:18:27.288 } 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_set_max_subsystems", 00:18:27.288 "params": { 00:18:27.288 "max_subsystems": 1024 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_set_crdt", 00:18:27.288 "params": { 00:18:27.288 "crdt1": 0, 00:18:27.288 "crdt2": 0, 00:18:27.288 "crdt3": 0 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_create_transport", 00:18:27.288 "params": { 00:18:27.288 "trtype": "TCP", 00:18:27.288 "max_queue_depth": 128, 00:18:27.288 "max_io_qpairs_per_ctrlr": 127, 00:18:27.288 "in_capsule_data_size": 4096, 00:18:27.288 "max_io_size": 131072, 00:18:27.288 "io_unit_size": 131072, 00:18:27.288 "max_aq_depth": 128, 00:18:27.288 "num_shared_buffers": 511, 00:18:27.288 "buf_cache_size": 4294967295, 00:18:27.288 "dif_insert_or_strip": false, 00:18:27.288 "zcopy": false, 00:18:27.288 "c2h_success": false, 00:18:27.288 "sock_priority": 0, 00:18:27.288 "abort_timeout_sec": 1, 00:18:27.288 "ack_timeout": 0 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_create_subsystem", 00:18:27.288 "params": { 00:18:27.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.288 "allow_any_host": false, 00:18:27.288 "serial_number": "00000000000000000000", 00:18:27.288 "model_number": "SPDK bdev Controller", 
00:18:27.288 "max_namespaces": 32, 00:18:27.288 "min_cntlid": 1, 00:18:27.288 "max_cntlid": 65519, 00:18:27.288 "ana_reporting": false 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_subsystem_add_host", 00:18:27.288 "params": { 00:18:27.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.288 "host": "nqn.2016-06.io.spdk:host1", 00:18:27.288 "psk": "key0" 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_subsystem_add_ns", 00:18:27.288 "params": { 00:18:27.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.288 "namespace": { 00:18:27.288 "nsid": 1, 00:18:27.288 "bdev_name": "malloc0", 00:18:27.288 "nguid": "4FA11C87F4CF4BB1BD3C72F6EDC6C23C", 00:18:27.288 "uuid": "4fa11c87-f4cf-4bb1-bd3c-72f6edc6c23c", 00:18:27.288 "no_auto_visible": false 00:18:27.288 } 00:18:27.288 } 00:18:27.288 }, 00:18:27.288 { 00:18:27.288 "method": "nvmf_subsystem_add_listener", 00:18:27.288 "params": { 00:18:27.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.288 "listen_address": { 00:18:27.288 "trtype": "TCP", 00:18:27.288 "adrfam": "IPv4", 00:18:27.288 "traddr": "10.0.0.2", 00:18:27.288 "trsvcid": "4420" 00:18:27.288 }, 00:18:27.288 "secure_channel": true 00:18:27.288 } 00:18:27.288 } 00:18:27.288 ] 00:18:27.288 } 00:18:27.288 ] 00:18:27.288 }' 00:18:27.288 20:10:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:27.288 20:10:09 -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 20:10:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:27.288 20:10:09 -- nvmf/common.sh@470 -- # nvmfpid=70948 00:18:27.288 20:10:09 -- nvmf/common.sh@471 -- # waitforlisten 70948 00:18:27.288 20:10:09 -- common/autotest_common.sh@817 -- # '[' -z 70948 ']' 00:18:27.288 20:10:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.288 20:10:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.288 20:10:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.288 20:10:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.288 20:10:09 -- common/autotest_common.sh@10 -- # set +x 00:18:27.547 [2024-04-24 20:10:09.568182] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:27.547 [2024-04-24 20:10:09.568254] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.547 [2024-04-24 20:10:09.706151] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.547 [2024-04-24 20:10:09.795084] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.547 [2024-04-24 20:10:09.795223] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.547 [2024-04-24 20:10:09.795260] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.547 [2024-04-24 20:10:09.795286] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.547 [2024-04-24 20:10:09.795301] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:27.547 [2024-04-24 20:10:09.795395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.806 [2024-04-24 20:10:10.007535] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.806 [2024-04-24 20:10:10.039426] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:27.806 [2024-04-24 20:10:10.039533] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.806 [2024-04-24 20:10:10.039715] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.375 20:10:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.375 20:10:10 -- common/autotest_common.sh@850 -- # return 0 00:18:28.375 20:10:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:28.375 20:10:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:28.375 20:10:10 -- common/autotest_common.sh@10 -- # set +x 00:18:28.375 20:10:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.375 20:10:10 -- target/tls.sh@272 -- # bdevperf_pid=70980 00:18:28.375 20:10:10 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:28.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.375 20:10:10 -- target/tls.sh@273 -- # waitforlisten 70980 /var/tmp/bdevperf.sock 00:18:28.375 20:10:10 -- common/autotest_common.sh@817 -- # '[' -z 70980 ']' 00:18:28.375 20:10:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.375 20:10:10 -- target/tls.sh@270 -- # echo '{ 00:18:28.375 "subsystems": [ 00:18:28.375 { 00:18:28.375 "subsystem": "keyring", 00:18:28.375 "config": [ 00:18:28.375 { 00:18:28.375 "method": "keyring_file_add_key", 00:18:28.375 "params": { 00:18:28.375 "name": "key0", 00:18:28.375 "path": "/tmp/tmp.iZTqSWOR1x" 00:18:28.375 } 00:18:28.375 } 00:18:28.375 ] 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "subsystem": "iobuf", 00:18:28.375 "config": [ 00:18:28.375 { 00:18:28.375 "method": "iobuf_set_options", 00:18:28.375 "params": { 00:18:28.375 "small_pool_count": 8192, 00:18:28.375 "large_pool_count": 1024, 00:18:28.375 "small_bufsize": 8192, 00:18:28.375 "large_bufsize": 135168 00:18:28.375 } 00:18:28.375 } 00:18:28.375 ] 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "subsystem": "sock", 00:18:28.375 "config": [ 00:18:28.375 { 00:18:28.375 "method": "sock_impl_set_options", 00:18:28.375 "params": { 00:18:28.375 "impl_name": "uring", 00:18:28.375 "recv_buf_size": 2097152, 00:18:28.375 "send_buf_size": 2097152, 00:18:28.375 "enable_recv_pipe": true, 00:18:28.375 "enable_quickack": false, 00:18:28.375 "enable_placement_id": 0, 00:18:28.375 "enable_zerocopy_send_server": false, 00:18:28.375 "enable_zerocopy_send_client": false, 00:18:28.375 "zerocopy_threshold": 0, 00:18:28.375 "tls_version": 0, 00:18:28.375 "enable_ktls": false 00:18:28.375 } 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "method": "sock_impl_set_options", 00:18:28.375 "params": { 00:18:28.375 "impl_name": "posix", 00:18:28.375 "recv_buf_size": 2097152, 00:18:28.375 "send_buf_size": 2097152, 00:18:28.375 "enable_recv_pipe": true, 00:18:28.375 "enable_quickack": false, 00:18:28.375 "enable_placement_id": 0, 00:18:28.375 "enable_zerocopy_send_server": true, 
00:18:28.375 "enable_zerocopy_send_client": false, 00:18:28.375 "zerocopy_threshold": 0, 00:18:28.375 "tls_version": 0, 00:18:28.375 "enable_ktls": false 00:18:28.375 } 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "method": "sock_impl_set_options", 00:18:28.375 "params": { 00:18:28.375 "impl_name": "ssl", 00:18:28.375 "recv_buf_size": 4096, 00:18:28.375 "send_buf_size": 4096, 00:18:28.375 "enable_recv_pipe": true, 00:18:28.375 "enable_quickack": false, 00:18:28.375 "enable_placement_id": 0, 00:18:28.375 "enable_zerocopy_send_server": true, 00:18:28.375 "enable_zerocopy_send_client": false, 00:18:28.375 "zerocopy_threshold": 0, 00:18:28.375 "tls_version": 0, 00:18:28.375 "enable_ktls": false 00:18:28.375 } 00:18:28.375 } 00:18:28.375 ] 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "subsystem": "vmd", 00:18:28.375 "config": [] 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "subsystem": "accel", 00:18:28.375 "config": [ 00:18:28.375 { 00:18:28.375 "method": "accel_set_options", 00:18:28.375 "params": { 00:18:28.375 "small_cache_size": 128, 00:18:28.375 "large_cache_size": 16, 00:18:28.375 "task_count": 2048, 00:18:28.375 "sequence_count": 2048, 00:18:28.375 "buf_count": 2048 00:18:28.375 } 00:18:28.375 } 00:18:28.375 ] 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "subsystem": "bdev", 00:18:28.375 "config": [ 00:18:28.375 { 00:18:28.375 "method": "bdev_set_options", 00:18:28.375 "params": { 00:18:28.375 "bdev_io_pool_size": 65535, 00:18:28.375 "bdev_io_cache_size": 256, 00:18:28.375 "bdev_auto_examine": true, 00:18:28.375 "iobuf_small_cache_size": 128, 00:18:28.375 "iobuf_large_cache_size": 16 00:18:28.375 } 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "method": "bdev_raid_set_options", 00:18:28.375 "params": { 00:18:28.375 "process_window_size_kb": 1024 00:18:28.375 } 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "method": "bdev_iscsi_set_options", 00:18:28.375 "params": { 00:18:28.375 "timeout_sec": 30 00:18:28.375 } 00:18:28.375 }, 00:18:28.375 { 00:18:28.375 "method": "bdev_nvme_set_options", 00:18:28.375 "params": { 00:18:28.375 "action_on_timeout": "none", 00:18:28.375 "timeout_us": 0, 00:18:28.375 "timeout_admin_us": 0, 00:18:28.375 "keep_alive_timeout_ms": 10000, 00:18:28.375 "arbitration_burst": 0, 00:18:28.375 "low_priority_weight": 0, 00:18:28.375 "medium_priority_weight": 0, 00:18:28.375 "high_priority_weight": 0, 00:18:28.375 "nvme_adminq_poll_period_us": 10000, 00:18:28.375 "nvme_ioq_poll_period_us": 0, 00:18:28.375 "io_queue_requests": 512, 00:18:28.375 "delay_cmd_submit": true, 00:18:28.375 "transport_retry_count": 4, 00:18:28.375 "bdev_retry_count": 3, 00:18:28.375 "transport_ack_timeout": 0, 00:18:28.375 "ctrlr_loss_timeout_sec": 0, 00:18:28.375 "reconnect_delay_sec": 0, 00:18:28.375 "fast_io_fail_timeout_sec": 0, 00:18:28.375 "disable_auto_failback": false, 00:18:28.376 "genera 20:10:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:28.376 te_uuids": false, 00:18:28.376 "transport_tos": 0, 00:18:28.376 "nvme_error_stat": false, 00:18:28.376 "rdma_srq_size": 0, 00:18:28.376 "io_path_stat": false, 00:18:28.376 "allow_accel_sequence": false, 00:18:28.376 "rdma_max_cq_size": 0, 00:18:28.376 "rdma_cm_event_timeout_ms": 0, 00:18:28.376 "dhchap_digests": [ 00:18:28.376 "sha256", 00:18:28.376 "sha384", 00:18:28.376 "sha512" 00:18:28.376 ], 00:18:28.376 "dhchap_dhgroups": [ 00:18:28.376 "null", 00:18:28.376 "ffdhe2048", 00:18:28.376 "ffdhe3072", 00:18:28.376 "ffdhe4096", 00:18:28.376 "ffdhe6144", 00:18:28.376 "ffdhe8192" 00:18:28.376 ] 00:18:28.376 } 00:18:28.376 }, 
00:18:28.376 { 00:18:28.376 "method": "bdev_nvme_attach_controller", 00:18:28.376 "params": { 00:18:28.376 "name": "nvme0", 00:18:28.376 "trtype": "TCP", 00:18:28.376 "adrfam": "IPv4", 00:18:28.376 "traddr": "10.0.0.2", 00:18:28.376 "trsvcid": "4420", 00:18:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.376 "prchk_reftag": false, 00:18:28.376 "prchk_guard": false, 00:18:28.376 "ctrlr_loss_timeout_sec": 0, 00:18:28.376 "reconnect_delay_sec": 0, 00:18:28.376 "fast_io_fail_timeout_sec": 0, 00:18:28.376 "psk": "key0", 00:18:28.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.376 "hdgst": false, 00:18:28.376 "ddgst": false 00:18:28.376 } 00:18:28.376 }, 00:18:28.376 { 00:18:28.376 "method": "bdev_nvme_set_hotplug", 00:18:28.376 "params": { 00:18:28.376 "period_us": 100000, 00:18:28.376 "enable": false 00:18:28.376 } 00:18:28.376 }, 00:18:28.376 { 00:18:28.376 "method": "bdev_enable_histogram", 00:18:28.376 "params": { 00:18:28.376 "name": "nvme0n1", 00:18:28.376 "enable": true 00:18:28.376 } 00:18:28.376 }, 00:18:28.376 { 00:18:28.376 "method": "bdev_wait_for_examine" 00:18:28.376 } 00:18:28.376 ] 00:18:28.376 }, 00:18:28.376 { 00:18:28.376 "subsystem": "nbd", 00:18:28.376 "config": [] 00:18:28.376 } 00:18:28.376 ] 00:18:28.376 }' 00:18:28.376 20:10:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.376 20:10:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:28.376 20:10:10 -- common/autotest_common.sh@10 -- # set +x 00:18:28.376 [2024-04-24 20:10:10.547282] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:28.376 [2024-04-24 20:10:10.547361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:18:28.634 [2024-04-24 20:10:10.688082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.634 [2024-04-24 20:10:10.784450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.892 [2024-04-24 20:10:10.940871] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.459 20:10:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:29.459 20:10:11 -- common/autotest_common.sh@850 -- # return 0 00:18:29.459 20:10:11 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:29.459 20:10:11 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:29.459 20:10:11 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.459 20:10:11 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.459 Running I/O for 1 seconds... 
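The initiator side mirrors the target: bdevperf is handed the same PSK file as keyring key "key0" and a bdev_nvme_attach_controller call to 10.0.0.2:4420 with psk key0 and hostnqn host1; the test then confirms the controller came up as "nvme0" via bdev_nvme_get_controllers before driving one second of verify I/O through bdevperf.py perform_tests. A hedged one-liner doing the same attach by hand against the bdevperf RPC socket (whether --psk takes a keyring key name, as here, or a key file path has varied across SPDK releases):

  # Sketch: manual TLS attach against the bdevperf app launched above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0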
00:18:30.836 00:18:30.836 Latency(us) 00:18:30.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.836 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.836 Verification LBA range: start 0x0 length 0x2000 00:18:30.836 nvme0n1 : 1.01 5877.67 22.96 0.00 0.00 21573.20 5780.90 18888.10 00:18:30.836 =================================================================================================================== 00:18:30.836 Total : 5877.67 22.96 0.00 0.00 21573.20 5780.90 18888.10 00:18:30.836 0 00:18:30.836 20:10:12 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:30.836 20:10:12 -- target/tls.sh@279 -- # cleanup 00:18:30.836 20:10:12 -- target/tls.sh@15 -- # process_shm --id 0 00:18:30.836 20:10:12 -- common/autotest_common.sh@794 -- # type=--id 00:18:30.836 20:10:12 -- common/autotest_common.sh@795 -- # id=0 00:18:30.836 20:10:12 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:30.836 20:10:12 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:30.836 20:10:12 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:30.836 20:10:12 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:30.836 20:10:12 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:30.836 20:10:12 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:30.836 nvmf_trace.0 00:18:30.836 20:10:12 -- common/autotest_common.sh@809 -- # return 0 00:18:30.836 20:10:12 -- target/tls.sh@16 -- # killprocess 70980 00:18:30.836 20:10:12 -- common/autotest_common.sh@936 -- # '[' -z 70980 ']' 00:18:30.836 20:10:12 -- common/autotest_common.sh@940 -- # kill -0 70980 00:18:30.836 20:10:12 -- common/autotest_common.sh@941 -- # uname 00:18:30.836 20:10:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:30.836 20:10:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70980 00:18:30.836 20:10:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:30.836 20:10:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:30.836 20:10:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70980' 00:18:30.836 killing process with pid 70980 00:18:30.836 20:10:12 -- common/autotest_common.sh@955 -- # kill 70980 00:18:30.836 Received shutdown signal, test time was about 1.000000 seconds 00:18:30.836 00:18:30.836 Latency(us) 00:18:30.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.836 =================================================================================================================== 00:18:30.836 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.836 20:10:12 -- common/autotest_common.sh@960 -- # wait 70980 00:18:30.836 20:10:13 -- target/tls.sh@17 -- # nvmftestfini 00:18:30.836 20:10:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:30.836 20:10:13 -- nvmf/common.sh@117 -- # sync 00:18:31.098 20:10:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.098 20:10:13 -- nvmf/common.sh@120 -- # set +e 00:18:31.098 20:10:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.098 20:10:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.098 rmmod nvme_tcp 00:18:31.098 rmmod nvme_fabrics 00:18:31.098 rmmod nvme_keyring 00:18:31.098 20:10:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.098 20:10:13 -- nvmf/common.sh@124 -- # set -e 00:18:31.098 20:10:13 -- 
nvmf/common.sh@125 -- # return 0 00:18:31.098 20:10:13 -- nvmf/common.sh@478 -- # '[' -n 70948 ']' 00:18:31.098 20:10:13 -- nvmf/common.sh@479 -- # killprocess 70948 00:18:31.098 20:10:13 -- common/autotest_common.sh@936 -- # '[' -z 70948 ']' 00:18:31.098 20:10:13 -- common/autotest_common.sh@940 -- # kill -0 70948 00:18:31.098 20:10:13 -- common/autotest_common.sh@941 -- # uname 00:18:31.098 20:10:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.098 20:10:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70948 00:18:31.098 20:10:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:31.098 20:10:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:31.098 killing process with pid 70948 00:18:31.098 20:10:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70948' 00:18:31.098 20:10:13 -- common/autotest_common.sh@955 -- # kill 70948 00:18:31.098 [2024-04-24 20:10:13.229355] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:31.098 20:10:13 -- common/autotest_common.sh@960 -- # wait 70948 00:18:31.358 20:10:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:31.358 20:10:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:31.358 20:10:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:31.358 20:10:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.358 20:10:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.358 20:10:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.358 20:10:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.358 20:10:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.358 20:10:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:31.358 20:10:13 -- target/tls.sh@18 -- # rm -f /tmp/tmp.VwFKAdLc2p /tmp/tmp.2BJV3CYPe7 /tmp/tmp.iZTqSWOR1x 00:18:31.358 00:18:31.358 real 1m22.624s 00:18:31.358 user 2m10.561s 00:18:31.358 sys 0m25.785s 00:18:31.358 20:10:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.358 20:10:13 -- common/autotest_common.sh@10 -- # set +x 00:18:31.358 ************************************ 00:18:31.358 END TEST nvmf_tls 00:18:31.358 ************************************ 00:18:31.358 20:10:13 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:31.358 20:10:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:31.358 20:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.358 20:10:13 -- common/autotest_common.sh@10 -- # set +x 00:18:31.618 ************************************ 00:18:31.618 START TEST nvmf_fips 00:18:31.618 ************************************ 00:18:31.618 20:10:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:31.618 * Looking for test storage... 
00:18:31.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:31.618 20:10:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.618 20:10:13 -- nvmf/common.sh@7 -- # uname -s 00:18:31.618 20:10:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.618 20:10:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.618 20:10:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.618 20:10:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.618 20:10:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.618 20:10:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.618 20:10:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.618 20:10:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.618 20:10:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.618 20:10:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.618 20:10:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:18:31.618 20:10:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:18:31.618 20:10:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.618 20:10:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.618 20:10:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.618 20:10:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.618 20:10:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.618 20:10:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.618 20:10:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.618 20:10:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.618 20:10:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.618 20:10:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.619 20:10:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.619 20:10:13 -- paths/export.sh@5 -- # export PATH 00:18:31.619 20:10:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.619 20:10:13 -- nvmf/common.sh@47 -- # : 0 00:18:31.619 20:10:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.619 20:10:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.619 20:10:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.619 20:10:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.619 20:10:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.619 20:10:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.619 20:10:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.619 20:10:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.619 20:10:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:31.619 20:10:13 -- fips/fips.sh@89 -- # check_openssl_version 00:18:31.619 20:10:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:31.619 20:10:13 -- fips/fips.sh@85 -- # openssl version 00:18:31.619 20:10:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:31.879 20:10:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:31.879 20:10:13 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:31.879 20:10:13 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:31.879 20:10:13 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:31.879 20:10:13 -- scripts/common.sh@333 -- # IFS=.-: 00:18:31.879 20:10:13 -- scripts/common.sh@333 -- # read -ra ver1 00:18:31.879 20:10:13 -- scripts/common.sh@334 -- # IFS=.-: 00:18:31.879 20:10:13 -- scripts/common.sh@334 -- # read -ra ver2 00:18:31.879 20:10:13 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:31.879 20:10:13 -- scripts/common.sh@337 -- # ver1_l=3 00:18:31.879 20:10:13 -- scripts/common.sh@338 -- # ver2_l=3 00:18:31.879 20:10:13 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:31.879 20:10:13 -- scripts/common.sh@341 -- # case "$op" in 00:18:31.879 20:10:13 -- scripts/common.sh@345 -- # : 1 00:18:31.879 20:10:13 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:31.879 20:10:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:31.879 20:10:13 -- scripts/common.sh@362 -- # decimal 3 00:18:31.879 20:10:13 -- scripts/common.sh@350 -- # local d=3 00:18:31.879 20:10:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:31.879 20:10:13 -- scripts/common.sh@352 -- # echo 3 00:18:31.879 20:10:13 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:31.879 20:10:13 -- scripts/common.sh@363 -- # decimal 3 00:18:31.879 20:10:13 -- scripts/common.sh@350 -- # local d=3 00:18:31.879 20:10:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:31.879 20:10:13 -- scripts/common.sh@352 -- # echo 3 00:18:31.879 20:10:13 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:31.879 20:10:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:31.879 20:10:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:31.879 20:10:13 -- scripts/common.sh@361 -- # (( v++ )) 00:18:31.879 20:10:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.879 20:10:13 -- scripts/common.sh@362 -- # decimal 0 00:18:31.879 20:10:13 -- scripts/common.sh@350 -- # local d=0 00:18:31.879 20:10:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:31.879 20:10:13 -- scripts/common.sh@352 -- # echo 0 00:18:31.879 20:10:13 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:31.879 20:10:13 -- scripts/common.sh@363 -- # decimal 0 00:18:31.879 20:10:13 -- scripts/common.sh@350 -- # local d=0 00:18:31.879 20:10:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:31.879 20:10:13 -- scripts/common.sh@352 -- # echo 0 00:18:31.879 20:10:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:31.879 20:10:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:31.879 20:10:13 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:31.879 20:10:13 -- scripts/common.sh@361 -- # (( v++ )) 00:18:31.879 20:10:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.879 20:10:13 -- scripts/common.sh@362 -- # decimal 9 00:18:31.879 20:10:13 -- scripts/common.sh@350 -- # local d=9 00:18:31.879 20:10:13 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:31.879 20:10:13 -- scripts/common.sh@352 -- # echo 9 00:18:31.879 20:10:13 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:31.879 20:10:13 -- scripts/common.sh@363 -- # decimal 0 00:18:31.879 20:10:13 -- scripts/common.sh@350 -- # local d=0 00:18:31.879 20:10:13 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:31.879 20:10:13 -- scripts/common.sh@352 -- # echo 0 00:18:31.879 20:10:13 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:31.879 20:10:13 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:31.879 20:10:13 -- scripts/common.sh@364 -- # return 0 00:18:31.879 20:10:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:31.879 20:10:13 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:31.879 20:10:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:31.879 20:10:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:31.879 20:10:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:31.879 20:10:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:31.879 20:10:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:31.879 20:10:13 -- fips/fips.sh@113 -- # build_openssl_config 00:18:31.879 20:10:13 -- fips/fips.sh@37 -- # cat 00:18:31.879 20:10:13 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:31.879 20:10:13 -- fips/fips.sh@58 -- # cat - 00:18:31.879 20:10:13 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:31.879 20:10:13 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:31.879 20:10:13 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:31.879 20:10:13 -- fips/fips.sh@116 -- # openssl list -providers 00:18:31.879 20:10:13 -- fips/fips.sh@116 -- # grep name 00:18:31.879 20:10:14 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:31.879 20:10:14 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:31.879 20:10:14 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:31.879 20:10:14 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:31.879 20:10:14 -- common/autotest_common.sh@638 -- # local es=0 00:18:31.879 20:10:14 -- fips/fips.sh@127 -- # : 00:18:31.879 20:10:14 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:31.879 20:10:14 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:31.879 20:10:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.879 20:10:14 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:31.879 20:10:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.879 20:10:14 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:31.879 20:10:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.879 20:10:14 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:31.879 20:10:14 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:31.879 20:10:14 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:31.879 Error setting digest 00:18:31.879 0082BB9AA27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:31.879 0082BB9AA27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:31.879 20:10:14 -- common/autotest_common.sh@641 -- # es=1 00:18:31.879 20:10:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:31.879 20:10:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:31.879 20:10:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:31.879 20:10:14 -- fips/fips.sh@130 -- # nvmftestinit 00:18:31.879 20:10:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:31.879 20:10:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.879 20:10:14 -- nvmf/common.sh@437 -- # prepare_net_devs 
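The fips.sh preamble above is the actual FIPS gate: it requires OpenSSL 3.0.0 or newer, checks that the FIPS provider module (/usr/lib64/ossl-modules/fips.so) is installed, writes a temporary OPENSSL_CONF (spdk_fips.conf) that activates the base and fips providers, and then asserts that a non-approved digest is rejected, so the "Error setting digest" failure from openssl md5 is the expected, passing outcome. A minimal sketch of the same checks run interactively, assuming an OPENSSL_CONF that enables the fips provider:

  # Sketch: reproduce the FIPS sanity check by hand (assumes OPENSSL_CONF enables the fips provider).
  openssl version                       # expect 3.0.0 or newer
  openssl list -providers | grep name   # expect both a base and a fips provider
  echo -n test | openssl md5            # must fail: MD5 is not FIPS-approved
  echo -n test | openssl sha256         # approved digests still succeed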
00:18:31.879 20:10:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:31.879 20:10:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:31.879 20:10:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.879 20:10:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.879 20:10:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.879 20:10:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:31.879 20:10:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:31.879 20:10:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:31.879 20:10:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:31.879 20:10:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:31.879 20:10:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:31.879 20:10:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.879 20:10:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.879 20:10:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:31.879 20:10:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:31.879 20:10:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.879 20:10:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.879 20:10:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.879 20:10:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.879 20:10:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.879 20:10:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.879 20:10:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.879 20:10:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.879 20:10:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:31.879 20:10:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:32.138 Cannot find device "nvmf_tgt_br" 00:18:32.138 20:10:14 -- nvmf/common.sh@155 -- # true 00:18:32.138 20:10:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:32.138 Cannot find device "nvmf_tgt_br2" 00:18:32.138 20:10:14 -- nvmf/common.sh@156 -- # true 00:18:32.138 20:10:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:32.138 20:10:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:32.138 Cannot find device "nvmf_tgt_br" 00:18:32.138 20:10:14 -- nvmf/common.sh@158 -- # true 00:18:32.138 20:10:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:32.138 Cannot find device "nvmf_tgt_br2" 00:18:32.138 20:10:14 -- nvmf/common.sh@159 -- # true 00:18:32.138 20:10:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:32.138 20:10:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:32.138 20:10:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:32.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.138 20:10:14 -- nvmf/common.sh@162 -- # true 00:18:32.138 20:10:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:32.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.138 20:10:14 -- nvmf/common.sh@163 -- # true 00:18:32.138 20:10:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:32.138 20:10:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:32.138 20:10:14 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:32.138 20:10:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:32.138 20:10:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:32.139 20:10:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:32.139 20:10:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:32.139 20:10:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:32.139 20:10:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:32.139 20:10:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:32.139 20:10:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:32.139 20:10:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:32.139 20:10:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:32.139 20:10:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:32.139 20:10:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:32.139 20:10:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:32.139 20:10:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:32.139 20:10:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:32.139 20:10:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:32.139 20:10:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:32.409 20:10:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:32.409 20:10:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:32.409 20:10:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:32.409 20:10:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:32.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:18:32.409 00:18:32.409 --- 10.0.0.2 ping statistics --- 00:18:32.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.409 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:32.409 20:10:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:32.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:32.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:32.409 00:18:32.409 --- 10.0.0.3 ping statistics --- 00:18:32.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.409 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:32.409 20:10:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:32.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:32.409 00:18:32.409 --- 10.0.0.1 ping statistics --- 00:18:32.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.409 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:32.409 20:10:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.409 20:10:14 -- nvmf/common.sh@422 -- # return 0 00:18:32.409 20:10:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:32.409 20:10:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.409 20:10:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:32.409 20:10:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:32.410 20:10:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.410 20:10:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:32.410 20:10:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:32.410 20:10:14 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:32.410 20:10:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:32.410 20:10:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:32.410 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:18:32.410 20:10:14 -- nvmf/common.sh@470 -- # nvmfpid=71248 00:18:32.410 20:10:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.410 20:10:14 -- nvmf/common.sh@471 -- # waitforlisten 71248 00:18:32.410 20:10:14 -- common/autotest_common.sh@817 -- # '[' -z 71248 ']' 00:18:32.410 20:10:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.410 20:10:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:32.410 20:10:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.410 20:10:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:32.410 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:18:32.410 [2024-04-24 20:10:14.528446] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:32.410 [2024-04-24 20:10:14.528515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.679 [2024-04-24 20:10:14.669402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.679 [2024-04-24 20:10:14.757544] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.679 [2024-04-24 20:10:14.757602] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.679 [2024-04-24 20:10:14.757625] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.679 [2024-04-24 20:10:14.757630] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.679 [2024-04-24 20:10:14.757634] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
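The nvmf_veth_init steps above build the virtual test network used by the rest of the run: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator-side end nvmf_init_if (10.0.0.1) left in the root namespace, all joined through the nvmf_br bridge, an iptables ACCEPT for TCP/4420, and ping checks in both directions before the target app is started inside the namespace. A trimmed sketch of the same topology with a single target interface, using the names from the log:

  # Sketch: minimal veth/netns topology equivalent to nvmf_veth_init (one target interface only).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator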
00:18:32.679 [2024-04-24 20:10:14.757655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.249 20:10:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:33.249 20:10:15 -- common/autotest_common.sh@850 -- # return 0 00:18:33.249 20:10:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:33.249 20:10:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:33.249 20:10:15 -- common/autotest_common.sh@10 -- # set +x 00:18:33.249 20:10:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.249 20:10:15 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:33.249 20:10:15 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:33.249 20:10:15 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:33.249 20:10:15 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:33.249 20:10:15 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:33.249 20:10:15 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:33.249 20:10:15 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:33.249 20:10:15 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.508 [2024-04-24 20:10:15.591512] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.508 [2024-04-24 20:10:15.607371] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:33.508 [2024-04-24 20:10:15.607433] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.508 [2024-04-24 20:10:15.607581] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.508 [2024-04-24 20:10:15.635687] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:33.508 malloc0 00:18:33.508 20:10:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.508 20:10:15 -- fips/fips.sh@147 -- # bdevperf_pid=71289 00:18:33.508 20:10:15 -- fips/fips.sh@148 -- # waitforlisten 71289 /var/tmp/bdevperf.sock 00:18:33.508 20:10:15 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.508 20:10:15 -- common/autotest_common.sh@817 -- # '[' -z 71289 ']' 00:18:33.508 20:10:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.508 20:10:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.508 20:10:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.508 20:10:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.508 20:10:15 -- common/autotest_common.sh@10 -- # set +x 00:18:33.508 [2024-04-24 20:10:15.742734] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:18:33.508 [2024-04-24 20:10:15.742817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71289 ] 00:18:33.768 [2024-04-24 20:10:15.880099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.768 [2024-04-24 20:10:15.963390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.706 20:10:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.706 20:10:16 -- common/autotest_common.sh@850 -- # return 0 00:18:34.706 20:10:16 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:34.706 [2024-04-24 20:10:16.760663] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.706 [2024-04-24 20:10:16.760759] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:34.706 TLSTESTn1 00:18:34.706 20:10:16 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.965 Running I/O for 10 seconds... 00:18:44.966 00:18:44.966 Latency(us) 00:18:44.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.966 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:44.966 Verification LBA range: start 0x0 length 0x2000 00:18:44.966 TLSTESTn1 : 10.01 5784.08 22.59 0.00 0.00 22092.63 4722.03 16713.11 00:18:44.966 =================================================================================================================== 00:18:44.966 Total : 5784.08 22.59 0.00 0.00 22092.63 4722.03 16713.11 00:18:44.966 0 00:18:44.966 20:10:26 -- fips/fips.sh@1 -- # cleanup 00:18:44.966 20:10:26 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:44.966 20:10:26 -- common/autotest_common.sh@794 -- # type=--id 00:18:44.966 20:10:26 -- common/autotest_common.sh@795 -- # id=0 00:18:44.966 20:10:26 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:44.966 20:10:26 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:44.966 20:10:26 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:44.966 20:10:26 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:44.966 20:10:26 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:44.966 20:10:26 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:44.966 nvmf_trace.0 00:18:44.966 20:10:27 -- common/autotest_common.sh@809 -- # return 0 00:18:44.966 20:10:27 -- fips/fips.sh@16 -- # killprocess 71289 00:18:44.966 20:10:27 -- common/autotest_common.sh@936 -- # '[' -z 71289 ']' 00:18:44.966 20:10:27 -- common/autotest_common.sh@940 -- # kill -0 71289 00:18:44.966 20:10:27 -- common/autotest_common.sh@941 -- # uname 00:18:44.966 20:10:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:44.966 20:10:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71289 00:18:44.966 20:10:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:44.966 
20:10:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:44.966 killing process with pid 71289 00:18:44.966 20:10:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71289' 00:18:44.966 20:10:27 -- common/autotest_common.sh@955 -- # kill 71289 00:18:44.966 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.966 00:18:44.966 Latency(us) 00:18:44.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.966 =================================================================================================================== 00:18:44.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:44.966 [2024-04-24 20:10:27.103933] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:44.966 20:10:27 -- common/autotest_common.sh@960 -- # wait 71289 00:18:45.226 20:10:27 -- fips/fips.sh@17 -- # nvmftestfini 00:18:45.226 20:10:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:45.226 20:10:27 -- nvmf/common.sh@117 -- # sync 00:18:45.226 20:10:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.226 20:10:27 -- nvmf/common.sh@120 -- # set +e 00:18:45.226 20:10:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.226 20:10:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.226 rmmod nvme_tcp 00:18:45.226 rmmod nvme_fabrics 00:18:45.226 rmmod nvme_keyring 00:18:45.226 20:10:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.226 20:10:27 -- nvmf/common.sh@124 -- # set -e 00:18:45.226 20:10:27 -- nvmf/common.sh@125 -- # return 0 00:18:45.226 20:10:27 -- nvmf/common.sh@478 -- # '[' -n 71248 ']' 00:18:45.226 20:10:27 -- nvmf/common.sh@479 -- # killprocess 71248 00:18:45.226 20:10:27 -- common/autotest_common.sh@936 -- # '[' -z 71248 ']' 00:18:45.226 20:10:27 -- common/autotest_common.sh@940 -- # kill -0 71248 00:18:45.226 20:10:27 -- common/autotest_common.sh@941 -- # uname 00:18:45.226 20:10:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:45.226 20:10:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71248 00:18:45.226 20:10:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:45.226 20:10:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:45.226 20:10:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71248' 00:18:45.226 killing process with pid 71248 00:18:45.226 20:10:27 -- common/autotest_common.sh@955 -- # kill 71248 00:18:45.226 [2024-04-24 20:10:27.468126] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:45.226 [2024-04-24 20:10:27.468181] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:45.226 20:10:27 -- common/autotest_common.sh@960 -- # wait 71248 00:18:45.485 20:10:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:45.485 20:10:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:45.485 20:10:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:45.485 20:10:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.485 20:10:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.485 20:10:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.485 20:10:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:18:45.485 20:10:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.744 20:10:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:45.744 20:10:27 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:45.744 ************************************ 00:18:45.744 END TEST nvmf_fips 00:18:45.744 ************************************ 00:18:45.744 00:18:45.744 real 0m14.092s 00:18:45.744 user 0m19.478s 00:18:45.744 sys 0m5.274s 00:18:45.744 20:10:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:45.744 20:10:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 20:10:27 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:18:45.744 20:10:27 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:18:45.744 20:10:27 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:18:45.744 20:10:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:45.744 20:10:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 20:10:27 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:18:45.744 20:10:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:45.744 20:10:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 20:10:27 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:18:45.744 20:10:27 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:45.744 20:10:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:45.744 20:10:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:45.744 20:10:27 -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 ************************************ 00:18:45.744 START TEST nvmf_identify 00:18:45.744 ************************************ 00:18:45.744 20:10:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:46.004 * Looking for test storage... 
00:18:46.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:46.004 20:10:28 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:46.004 20:10:28 -- nvmf/common.sh@7 -- # uname -s 00:18:46.004 20:10:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.004 20:10:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.004 20:10:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.004 20:10:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.004 20:10:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.004 20:10:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.004 20:10:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.004 20:10:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.004 20:10:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.004 20:10:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.004 20:10:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:18:46.004 20:10:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:18:46.004 20:10:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.004 20:10:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.004 20:10:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:46.004 20:10:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.004 20:10:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:46.004 20:10:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.004 20:10:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.004 20:10:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.004 20:10:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.004 20:10:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.004 20:10:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.004 20:10:28 -- paths/export.sh@5 -- # export PATH 00:18:46.004 20:10:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.004 20:10:28 -- nvmf/common.sh@47 -- # : 0 00:18:46.004 20:10:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.004 20:10:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.004 20:10:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.004 20:10:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.004 20:10:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.004 20:10:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.004 20:10:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.004 20:10:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.004 20:10:28 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:46.004 20:10:28 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.004 20:10:28 -- host/identify.sh@14 -- # nvmftestinit 00:18:46.004 20:10:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:46.004 20:10:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.004 20:10:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:46.004 20:10:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:46.004 20:10:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:46.004 20:10:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.004 20:10:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.004 20:10:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.004 20:10:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:46.004 20:10:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:46.004 20:10:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:46.004 20:10:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:46.004 20:10:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:46.004 20:10:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:46.004 20:10:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.004 20:10:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.004 20:10:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:46.004 20:10:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:46.004 20:10:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:46.004 20:10:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:46.004 20:10:28 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:46.004 20:10:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.004 20:10:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:46.004 20:10:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:46.004 20:10:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:46.004 20:10:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:46.004 20:10:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:46.004 20:10:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:46.004 Cannot find device "nvmf_tgt_br" 00:18:46.004 20:10:28 -- nvmf/common.sh@155 -- # true 00:18:46.004 20:10:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:46.004 Cannot find device "nvmf_tgt_br2" 00:18:46.004 20:10:28 -- nvmf/common.sh@156 -- # true 00:18:46.004 20:10:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:46.004 20:10:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:46.004 Cannot find device "nvmf_tgt_br" 00:18:46.004 20:10:28 -- nvmf/common.sh@158 -- # true 00:18:46.004 20:10:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:46.004 Cannot find device "nvmf_tgt_br2" 00:18:46.004 20:10:28 -- nvmf/common.sh@159 -- # true 00:18:46.004 20:10:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:46.264 20:10:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:46.264 20:10:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.264 20:10:28 -- nvmf/common.sh@162 -- # true 00:18:46.264 20:10:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.264 20:10:28 -- nvmf/common.sh@163 -- # true 00:18:46.264 20:10:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.264 20:10:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:46.264 20:10:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:46.264 20:10:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.264 20:10:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.264 20:10:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.264 20:10:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.264 20:10:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.264 20:10:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:46.264 20:10:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:46.264 20:10:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:46.264 20:10:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:46.264 20:10:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:46.264 20:10:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.264 20:10:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:46.264 20:10:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:18:46.264 20:10:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:46.264 20:10:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:46.264 20:10:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:46.264 20:10:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.264 20:10:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.264 20:10:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.264 20:10:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.264 20:10:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:46.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:46.264 00:18:46.264 --- 10.0.0.2 ping statistics --- 00:18:46.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.264 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:46.264 20:10:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:46.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:46.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:18:46.264 00:18:46.264 --- 10.0.0.3 ping statistics --- 00:18:46.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.264 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:46.264 20:10:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:46.264 00:18:46.264 --- 10.0.0.1 ping statistics --- 00:18:46.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.264 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:46.264 20:10:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.264 20:10:28 -- nvmf/common.sh@422 -- # return 0 00:18:46.264 20:10:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:46.264 20:10:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.264 20:10:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:46.264 20:10:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:46.264 20:10:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.264 20:10:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:46.264 20:10:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:46.264 20:10:28 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:46.264 20:10:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:46.264 20:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:46.264 20:10:28 -- host/identify.sh@19 -- # nvmfpid=71631 00:18:46.264 20:10:28 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:46.264 20:10:28 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:46.264 20:10:28 -- host/identify.sh@23 -- # waitforlisten 71631 00:18:46.264 20:10:28 -- common/autotest_common.sh@817 -- # '[' -z 71631 ']' 00:18:46.264 20:10:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.264 20:10:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.264 20:10:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:46.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.264 20:10:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.264 20:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:46.524 [2024-04-24 20:10:28.534068] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:46.524 [2024-04-24 20:10:28.534131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.524 [2024-04-24 20:10:28.674670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.524 [2024-04-24 20:10:28.769947] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.524 [2024-04-24 20:10:28.770005] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.524 [2024-04-24 20:10:28.770028] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.524 [2024-04-24 20:10:28.770033] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.524 [2024-04-24 20:10:28.770037] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.524 [2024-04-24 20:10:28.770279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.524 [2024-04-24 20:10:28.770506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.524 [2024-04-24 20:10:28.770466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.524 [2024-04-24 20:10:28.770510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.462 20:10:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.462 20:10:29 -- common/autotest_common.sh@850 -- # return 0 00:18:47.462 20:10:29 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 [2024-04-24 20:10:29.390851] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:47.462 20:10:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 20:10:29 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 Malloc0 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 [2024-04-24 20:10:29.504905] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:47.462 [2024-04-24 20:10:29.505143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:47.462 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.462 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:47.462 [2024-04-24 20:10:29.528853] nvmf_rpc.c: 276:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:18:47.462 [ 00:18:47.462 { 00:18:47.462 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:47.462 "subtype": "Discovery", 00:18:47.462 "listen_addresses": [ 00:18:47.462 { 00:18:47.462 "transport": "TCP", 00:18:47.462 "trtype": "TCP", 00:18:47.462 "adrfam": "IPv4", 00:18:47.462 "traddr": "10.0.0.2", 00:18:47.462 "trsvcid": "4420" 00:18:47.462 } 00:18:47.462 ], 00:18:47.462 "allow_any_host": true, 00:18:47.462 "hosts": [] 00:18:47.462 }, 00:18:47.462 { 00:18:47.462 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.462 "subtype": "NVMe", 00:18:47.462 "listen_addresses": [ 00:18:47.462 { 00:18:47.462 "transport": "TCP", 00:18:47.462 "trtype": "TCP", 00:18:47.462 "adrfam": "IPv4", 00:18:47.462 "traddr": "10.0.0.2", 00:18:47.462 "trsvcid": "4420" 00:18:47.462 } 00:18:47.462 ], 00:18:47.462 "allow_any_host": true, 00:18:47.462 "hosts": [], 00:18:47.462 "serial_number": "SPDK00000000000001", 00:18:47.462 "model_number": "SPDK bdev Controller", 00:18:47.462 "max_namespaces": 32, 00:18:47.462 "min_cntlid": 1, 00:18:47.462 "max_cntlid": 65519, 00:18:47.462 "namespaces": [ 00:18:47.462 { 00:18:47.462 "nsid": 1, 00:18:47.462 "bdev_name": "Malloc0", 00:18:47.462 "name": "Malloc0", 00:18:47.462 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:47.462 "eui64": "ABCDEF0123456789", 00:18:47.462 "uuid": "e1a70880-be53-4738-a7ed-08443a8d4258" 00:18:47.462 } 00:18:47.462 ] 00:18:47.462 } 00:18:47.462 ] 00:18:47.462 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.462 20:10:29 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:47.462 [2024-04-24 20:10:29.561156] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
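For readers following the trace, the setup that the nvmf_veth_init and rpc_cmd calls above performed can be summarized as the following sequence (a minimal sketch reconstructed only from the commands visible in this log; rpc.py stands in for the test suite's rpc_cmd wrapper, the second target interface nvmf_tgt_if2 / 10.0.0.3 and the individual "ip link set ... up" steps are omitted for brevity, and paths and addresses are the ones used in this run):

# network topology: initiator veth on the host, target veth inside the
# nvmf_tgt_ns_spdk namespace, both peers attached to the nvmf_br bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# target-side RPCs issued once nvmf_tgt (pid 71631) is listening on /var/tmp/spdk.sock
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# host-side query whose debug trace and controller report follow below
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all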
00:18:47.462 [2024-04-24 20:10:29.561198] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71673 ] 00:18:47.462 [2024-04-24 20:10:29.697887] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:18:47.462 [2024-04-24 20:10:29.697945] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:47.462 [2024-04-24 20:10:29.697950] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:47.462 [2024-04-24 20:10:29.697962] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:47.462 [2024-04-24 20:10:29.697974] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:18:47.462 [2024-04-24 20:10:29.698094] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:18:47.462 [2024-04-24 20:10:29.698133] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4d5300 0 00:18:47.462 [2024-04-24 20:10:29.712426] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:47.462 [2024-04-24 20:10:29.712458] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:47.462 [2024-04-24 20:10:29.712463] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:47.462 [2024-04-24 20:10:29.712467] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:47.462 [2024-04-24 20:10:29.712521] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.462 [2024-04-24 20:10:29.712529] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.462 [2024-04-24 20:10:29.712534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.462 [2024-04-24 20:10:29.712552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:47.462 [2024-04-24 20:10:29.712586] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.720431] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.720450] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.733 [2024-04-24 20:10:29.720453] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.720467] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:47.733 [2024-04-24 20:10:29.720490] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:47.733 [2024-04-24 20:10:29.720495] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:47.733 [2024-04-24 20:10:29.720511] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720514] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720517] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.733 [2024-04-24 20:10:29.720525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.733 [2024-04-24 20:10:29.720549] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.720600] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.720605] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.733 [2024-04-24 20:10:29.720608] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720611] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.720618] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:47.733 [2024-04-24 20:10:29.720624] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:47.733 [2024-04-24 20:10:29.720629] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720632] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720635] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.733 [2024-04-24 20:10:29.720641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.733 [2024-04-24 20:10:29.720654] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.720696] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.720701] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.733 [2024-04-24 20:10:29.720704] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720706] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.720711] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:47.733 [2024-04-24 20:10:29.720717] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:47.733 [2024-04-24 20:10:29.720722] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720727] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.733 [2024-04-24 20:10:29.720733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.733 [2024-04-24 20:10:29.720745] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.720789] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.720795] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:18:47.733 [2024-04-24 20:10:29.720797] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720800] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.720805] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:47.733 [2024-04-24 20:10:29.720811] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720815] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720817] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.733 [2024-04-24 20:10:29.720823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.733 [2024-04-24 20:10:29.720834] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.720898] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.720904] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.733 [2024-04-24 20:10:29.720907] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.720910] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.720914] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:47.733 [2024-04-24 20:10:29.720918] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:47.733 [2024-04-24 20:10:29.720925] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:47.733 [2024-04-24 20:10:29.721029] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:47.733 [2024-04-24 20:10:29.721038] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:47.733 [2024-04-24 20:10:29.721045] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.721048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.721051] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.733 [2024-04-24 20:10:29.721056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.733 [2024-04-24 20:10:29.721069] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.721124] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.721129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.733 [2024-04-24 20:10:29.721132] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.721134] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.721138] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:47.733 [2024-04-24 20:10:29.721145] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.721148] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.721150] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.733 [2024-04-24 20:10:29.721155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.733 [2024-04-24 20:10:29.721166] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.733 [2024-04-24 20:10:29.721217] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.733 [2024-04-24 20:10:29.721222] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.733 [2024-04-24 20:10:29.721224] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.733 [2024-04-24 20:10:29.721227] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.733 [2024-04-24 20:10:29.721230] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:47.733 [2024-04-24 20:10:29.721234] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:47.734 [2024-04-24 20:10:29.721239] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:47.734 [2024-04-24 20:10:29.721246] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:47.734 [2024-04-24 20:10:29.721254] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721256] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.734 [2024-04-24 20:10:29.721273] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.734 [2024-04-24 20:10:29.721369] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.734 [2024-04-24 20:10:29.721390] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.734 [2024-04-24 20:10:29.721393] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721396] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d5300): datao=0, datal=4096, cccid=0 00:18:47.734 [2024-04-24 20:10:29.721399] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51d9c0) on tqpair(0x4d5300): expected_datao=0, payload_size=4096 00:18:47.734 [2024-04-24 20:10:29.721403] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721409] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721413] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721420] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.734 [2024-04-24 20:10:29.721445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.734 [2024-04-24 20:10:29.721448] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721451] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.734 [2024-04-24 20:10:29.721459] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:47.734 [2024-04-24 20:10:29.721463] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:47.734 [2024-04-24 20:10:29.721466] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:47.734 [2024-04-24 20:10:29.721473] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:47.734 [2024-04-24 20:10:29.721477] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:47.734 [2024-04-24 20:10:29.721481] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:47.734 [2024-04-24 20:10:29.721487] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:47.734 [2024-04-24 20:10:29.721493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721496] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721498] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.734 [2024-04-24 20:10:29.721518] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.734 [2024-04-24 20:10:29.721580] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.734 [2024-04-24 20:10:29.721585] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.734 [2024-04-24 20:10:29.721588] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721591] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51d9c0) on tqpair=0x4d5300 00:18:47.734 [2024-04-24 20:10:29.721597] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721600] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721603] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.734 [2024-04-24 20:10:29.721613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721615] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721618] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.734 [2024-04-24 20:10:29.721627] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721630] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.734 [2024-04-24 20:10:29.721643] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721648] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.734 [2024-04-24 20:10:29.721656] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:18:47.734 [2024-04-24 20:10:29.721665] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:47.734 [2024-04-24 20:10:29.721670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721672] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.734 [2024-04-24 20:10:29.721703] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51d9c0, cid 0, qid 0 00:18:47.734 [2024-04-24 20:10:29.721708] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51db20, cid 1, qid 0 00:18:47.734 [2024-04-24 20:10:29.721711] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dc80, cid 2, qid 0 00:18:47.734 [2024-04-24 20:10:29.721715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.734 [2024-04-24 20:10:29.721718] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51df40, cid 4, qid 0 00:18:47.734 [2024-04-24 20:10:29.721832] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.734 [2024-04-24 20:10:29.721845] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.734 [2024-04-24 20:10:29.721847] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721850] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51df40) on tqpair=0x4d5300 00:18:47.734 [2024-04-24 20:10:29.721854] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:47.734 [2024-04-24 20:10:29.721858] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:47.734 [2024-04-24 20:10:29.721866] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721869] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.721874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.734 [2024-04-24 20:10:29.721886] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51df40, cid 4, qid 0 00:18:47.734 [2024-04-24 20:10:29.721930] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.734 [2024-04-24 20:10:29.721935] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.734 [2024-04-24 20:10:29.721937] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721940] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d5300): datao=0, datal=4096, cccid=4 00:18:47.734 [2024-04-24 20:10:29.721943] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51df40) on tqpair(0x4d5300): expected_datao=0, payload_size=4096 00:18:47.734 [2024-04-24 20:10:29.721946] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721952] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721955] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721961] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.734 [2024-04-24 20:10:29.721965] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.734 [2024-04-24 20:10:29.721968] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.721970] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51df40) on tqpair=0x4d5300 00:18:47.734 [2024-04-24 20:10:29.721979] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:47.734 [2024-04-24 20:10:29.721996] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.722000] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.722005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.734 [2024-04-24 20:10:29.722010] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.722013] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.722015] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4d5300) 00:18:47.734 [2024-04-24 20:10:29.722020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.734 [2024-04-24 20:10:29.722037] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51df40, cid 4, qid 0 00:18:47.734 [2024-04-24 20:10:29.722042] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51e0a0, cid 5, qid 0 00:18:47.734 [2024-04-24 20:10:29.722156] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.734 [2024-04-24 20:10:29.722174] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.734 [2024-04-24 20:10:29.722176] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.734 [2024-04-24 20:10:29.722179] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d5300): datao=0, datal=1024, cccid=4 00:18:47.734 [2024-04-24 20:10:29.722182] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51df40) on tqpair(0x4d5300): expected_datao=0, payload_size=1024 00:18:47.735 [2024-04-24 20:10:29.722185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722190] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722193] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722197] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.735 [2024-04-24 20:10:29.722202] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.735 [2024-04-24 20:10:29.722204] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722207] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51e0a0) on tqpair=0x4d5300 00:18:47.735 [2024-04-24 20:10:29.722219] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.735 [2024-04-24 20:10:29.722225] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.735 [2024-04-24 20:10:29.722227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722230] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51df40) on tqpair=0x4d5300 00:18:47.735 [2024-04-24 20:10:29.722251] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722255] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d5300) 00:18:47.735 [2024-04-24 20:10:29.722260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.735 [2024-04-24 20:10:29.722293] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51df40, cid 4, qid 0 00:18:47.735 [2024-04-24 20:10:29.722360] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.735 [2024-04-24 20:10:29.722365] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.735 [2024-04-24 20:10:29.722368] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722370] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d5300): datao=0, datal=3072, cccid=4 00:18:47.735 [2024-04-24 20:10:29.722373] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51df40) on tqpair(0x4d5300): expected_datao=0, payload_size=3072 00:18:47.735 [2024-04-24 20:10:29.722376] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722382] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722385] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 
20:10:29.722404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.735 [2024-04-24 20:10:29.722409] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.735 [2024-04-24 20:10:29.722412] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722415] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51df40) on tqpair=0x4d5300 00:18:47.735 [2024-04-24 20:10:29.722422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722425] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4d5300) 00:18:47.735 [2024-04-24 20:10:29.722430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.735 [2024-04-24 20:10:29.722446] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51df40, cid 4, qid 0 00:18:47.735 [2024-04-24 20:10:29.722501] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.735 [2024-04-24 20:10:29.722515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.735 [2024-04-24 20:10:29.722518] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722520] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4d5300): datao=0, datal=8, cccid=4 00:18:47.735 [2024-04-24 20:10:29.722524] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x51df40) on tqpair(0x4d5300): expected_datao=0, payload_size=8 00:18:47.735 [2024-04-24 20:10:29.722527] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722532] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722535] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722547] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.735 [2024-04-24 20:10:29.722552] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.735 [2024-04-24 20:10:29.722555] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.735 [2024-04-24 20:10:29.722558] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51df40) on tqpair=0x4d5300 00:18:47.735 ===================================================== 00:18:47.735 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:47.735 ===================================================== 00:18:47.735 Controller Capabilities/Features 00:18:47.735 ================================ 00:18:47.735 Vendor ID: 0000 00:18:47.735 Subsystem Vendor ID: 0000 00:18:47.735 Serial Number: .................... 00:18:47.735 Model Number: ........................................ 
00:18:47.735 Firmware Version: 24.05 00:18:47.735 Recommended Arb Burst: 0 00:18:47.735 IEEE OUI Identifier: 00 00 00 00:18:47.735 Multi-path I/O 00:18:47.735 May have multiple subsystem ports: No 00:18:47.735 May have multiple controllers: No 00:18:47.735 Associated with SR-IOV VF: No 00:18:47.735 Max Data Transfer Size: 131072 00:18:47.735 Max Number of Namespaces: 0 00:18:47.735 Max Number of I/O Queues: 1024 00:18:47.735 NVMe Specification Version (VS): 1.3 00:18:47.735 NVMe Specification Version (Identify): 1.3 00:18:47.735 Maximum Queue Entries: 128 00:18:47.735 Contiguous Queues Required: Yes 00:18:47.735 Arbitration Mechanisms Supported 00:18:47.735 Weighted Round Robin: Not Supported 00:18:47.735 Vendor Specific: Not Supported 00:18:47.735 Reset Timeout: 15000 ms 00:18:47.735 Doorbell Stride: 4 bytes 00:18:47.735 NVM Subsystem Reset: Not Supported 00:18:47.735 Command Sets Supported 00:18:47.735 NVM Command Set: Supported 00:18:47.735 Boot Partition: Not Supported 00:18:47.735 Memory Page Size Minimum: 4096 bytes 00:18:47.735 Memory Page Size Maximum: 4096 bytes 00:18:47.735 Persistent Memory Region: Not Supported 00:18:47.735 Optional Asynchronous Events Supported 00:18:47.735 Namespace Attribute Notices: Not Supported 00:18:47.735 Firmware Activation Notices: Not Supported 00:18:47.735 ANA Change Notices: Not Supported 00:18:47.735 PLE Aggregate Log Change Notices: Not Supported 00:18:47.735 LBA Status Info Alert Notices: Not Supported 00:18:47.735 EGE Aggregate Log Change Notices: Not Supported 00:18:47.735 Normal NVM Subsystem Shutdown event: Not Supported 00:18:47.735 Zone Descriptor Change Notices: Not Supported 00:18:47.735 Discovery Log Change Notices: Supported 00:18:47.735 Controller Attributes 00:18:47.735 128-bit Host Identifier: Not Supported 00:18:47.735 Non-Operational Permissive Mode: Not Supported 00:18:47.735 NVM Sets: Not Supported 00:18:47.735 Read Recovery Levels: Not Supported 00:18:47.735 Endurance Groups: Not Supported 00:18:47.735 Predictable Latency Mode: Not Supported 00:18:47.735 Traffic Based Keep ALive: Not Supported 00:18:47.735 Namespace Granularity: Not Supported 00:18:47.735 SQ Associations: Not Supported 00:18:47.735 UUID List: Not Supported 00:18:47.735 Multi-Domain Subsystem: Not Supported 00:18:47.735 Fixed Capacity Management: Not Supported 00:18:47.735 Variable Capacity Management: Not Supported 00:18:47.735 Delete Endurance Group: Not Supported 00:18:47.735 Delete NVM Set: Not Supported 00:18:47.735 Extended LBA Formats Supported: Not Supported 00:18:47.735 Flexible Data Placement Supported: Not Supported 00:18:47.735 00:18:47.735 Controller Memory Buffer Support 00:18:47.735 ================================ 00:18:47.735 Supported: No 00:18:47.735 00:18:47.735 Persistent Memory Region Support 00:18:47.735 ================================ 00:18:47.735 Supported: No 00:18:47.735 00:18:47.735 Admin Command Set Attributes 00:18:47.735 ============================ 00:18:47.735 Security Send/Receive: Not Supported 00:18:47.735 Format NVM: Not Supported 00:18:47.735 Firmware Activate/Download: Not Supported 00:18:47.735 Namespace Management: Not Supported 00:18:47.735 Device Self-Test: Not Supported 00:18:47.735 Directives: Not Supported 00:18:47.735 NVMe-MI: Not Supported 00:18:47.735 Virtualization Management: Not Supported 00:18:47.735 Doorbell Buffer Config: Not Supported 00:18:47.735 Get LBA Status Capability: Not Supported 00:18:47.735 Command & Feature Lockdown Capability: Not Supported 00:18:47.735 Abort Command Limit: 1 00:18:47.735 Async 
Event Request Limit: 4 00:18:47.735 Number of Firmware Slots: N/A 00:18:47.735 Firmware Slot 1 Read-Only: N/A 00:18:47.735 Firmware Activation Without Reset: N/A 00:18:47.735 Multiple Update Detection Support: N/A 00:18:47.735 Firmware Update Granularity: No Information Provided 00:18:47.735 Per-Namespace SMART Log: No 00:18:47.735 Asymmetric Namespace Access Log Page: Not Supported 00:18:47.735 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:47.735 Command Effects Log Page: Not Supported 00:18:47.735 Get Log Page Extended Data: Supported 00:18:47.735 Telemetry Log Pages: Not Supported 00:18:47.735 Persistent Event Log Pages: Not Supported 00:18:47.735 Supported Log Pages Log Page: May Support 00:18:47.735 Commands Supported & Effects Log Page: Not Supported 00:18:47.735 Feature Identifiers & Effects Log Page:May Support 00:18:47.735 NVMe-MI Commands & Effects Log Page: May Support 00:18:47.735 Data Area 4 for Telemetry Log: Not Supported 00:18:47.735 Error Log Page Entries Supported: 128 00:18:47.735 Keep Alive: Not Supported 00:18:47.735 00:18:47.735 NVM Command Set Attributes 00:18:47.735 ========================== 00:18:47.735 Submission Queue Entry Size 00:18:47.735 Max: 1 00:18:47.735 Min: 1 00:18:47.735 Completion Queue Entry Size 00:18:47.735 Max: 1 00:18:47.735 Min: 1 00:18:47.735 Number of Namespaces: 0 00:18:47.735 Compare Command: Not Supported 00:18:47.735 Write Uncorrectable Command: Not Supported 00:18:47.735 Dataset Management Command: Not Supported 00:18:47.736 Write Zeroes Command: Not Supported 00:18:47.736 Set Features Save Field: Not Supported 00:18:47.736 Reservations: Not Supported 00:18:47.736 Timestamp: Not Supported 00:18:47.736 Copy: Not Supported 00:18:47.736 Volatile Write Cache: Not Present 00:18:47.736 Atomic Write Unit (Normal): 1 00:18:47.736 Atomic Write Unit (PFail): 1 00:18:47.736 Atomic Compare & Write Unit: 1 00:18:47.736 Fused Compare & Write: Supported 00:18:47.736 Scatter-Gather List 00:18:47.736 SGL Command Set: Supported 00:18:47.736 SGL Keyed: Supported 00:18:47.736 SGL Bit Bucket Descriptor: Not Supported 00:18:47.736 SGL Metadata Pointer: Not Supported 00:18:47.736 Oversized SGL: Not Supported 00:18:47.736 SGL Metadata Address: Not Supported 00:18:47.736 SGL Offset: Supported 00:18:47.736 Transport SGL Data Block: Not Supported 00:18:47.736 Replay Protected Memory Block: Not Supported 00:18:47.736 00:18:47.736 Firmware Slot Information 00:18:47.736 ========================= 00:18:47.736 Active slot: 0 00:18:47.736 00:18:47.736 00:18:47.736 Error Log 00:18:47.736 ========= 00:18:47.736 00:18:47.736 Active Namespaces 00:18:47.736 ================= 00:18:47.736 Discovery Log Page 00:18:47.736 ================== 00:18:47.736 Generation Counter: 2 00:18:47.736 Number of Records: 2 00:18:47.736 Record Format: 0 00:18:47.736 00:18:47.736 Discovery Log Entry 0 00:18:47.736 ---------------------- 00:18:47.736 Transport Type: 3 (TCP) 00:18:47.736 Address Family: 1 (IPv4) 00:18:47.736 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:47.736 Entry Flags: 00:18:47.736 Duplicate Returned Information: 1 00:18:47.736 Explicit Persistent Connection Support for Discovery: 1 00:18:47.736 Transport Requirements: 00:18:47.736 Secure Channel: Not Required 00:18:47.736 Port ID: 0 (0x0000) 00:18:47.736 Controller ID: 65535 (0xffff) 00:18:47.736 Admin Max SQ Size: 128 00:18:47.736 Transport Service Identifier: 4420 00:18:47.736 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:47.736 Transport Address: 10.0.0.2 00:18:47.736 
Discovery Log Entry 1 00:18:47.736 ---------------------- 00:18:47.736 Transport Type: 3 (TCP) 00:18:47.736 Address Family: 1 (IPv4) 00:18:47.736 Subsystem Type: 2 (NVM Subsystem) 00:18:47.736 Entry Flags: 00:18:47.736 Duplicate Returned Information: 0 00:18:47.736 Explicit Persistent Connection Support for Discovery: 0 00:18:47.736 Transport Requirements: 00:18:47.736 Secure Channel: Not Required 00:18:47.736 Port ID: 0 (0x0000) 00:18:47.736 Controller ID: 65535 (0xffff) 00:18:47.736 Admin Max SQ Size: 128 00:18:47.736 Transport Service Identifier: 4420 00:18:47.736 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:47.736 Transport Address: 10.0.0.2 [2024-04-24 20:10:29.722637] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:18:47.736 [2024-04-24 20:10:29.722648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.736 [2024-04-24 20:10:29.722654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.736 [2024-04-24 20:10:29.722659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.736 [2024-04-24 20:10:29.722664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.736 [2024-04-24 20:10:29.722671] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722674] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722677] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.736 [2024-04-24 20:10:29.722683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.736 [2024-04-24 20:10:29.722697] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.736 [2024-04-24 20:10:29.722747] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.736 [2024-04-24 20:10:29.722752] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.736 [2024-04-24 20:10:29.722755] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722757] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.736 [2024-04-24 20:10:29.722766] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722770] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722773] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.736 [2024-04-24 20:10:29.722778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.736 [2024-04-24 20:10:29.722793] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.736 [2024-04-24 20:10:29.722861] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.736 [2024-04-24 20:10:29.722866] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.736 [2024-04-24 20:10:29.722869] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722872] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.736 [2024-04-24 20:10:29.722876] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:47.736 [2024-04-24 20:10:29.722879] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:47.736 [2024-04-24 20:10:29.722886] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722892] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.736 [2024-04-24 20:10:29.722898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.736 [2024-04-24 20:10:29.722910] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.736 [2024-04-24 20:10:29.722953] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.736 [2024-04-24 20:10:29.722959] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.736 [2024-04-24 20:10:29.722961] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722964] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.736 [2024-04-24 20:10:29.722972] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722975] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.722978] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.736 [2024-04-24 20:10:29.722984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.736 [2024-04-24 20:10:29.722995] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.736 [2024-04-24 20:10:29.723051] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.736 [2024-04-24 20:10:29.723056] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.736 [2024-04-24 20:10:29.723059] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.723062] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.736 [2024-04-24 20:10:29.723070] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.723073] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.723075] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.736 [2024-04-24 20:10:29.723081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.736 [2024-04-24 20:10:29.723092] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.736 [2024-04-24 20:10:29.723134] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.736 [2024-04-24 
20:10:29.723139] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.736 [2024-04-24 20:10:29.723141] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.723144] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.736 [2024-04-24 20:10:29.723152] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.723155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.736 [2024-04-24 20:10:29.723158] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.736 [2024-04-24 20:10:29.723164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.736 [2024-04-24 20:10:29.723175] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723219] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723224] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723230] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723237] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723240] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723243] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723306] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723312] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723314] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723317] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723325] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723328] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723330] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723347] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723420] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 
[2024-04-24 20:10:29.723422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723433] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723436] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723494] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723499] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723501] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723504] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723523] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723526] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723528] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723544] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723599] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723601] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723604] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723611] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723632] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723675] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723680] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723683] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723692] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723695] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723697] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723713] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723761] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723768] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723778] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723781] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723785] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723824] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723872] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723877] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723880] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723883] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723891] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723894] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723897] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.723915] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.723967] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.723973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.723975] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723978] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.723986] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723989] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.723992] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.723997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.724009] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.724060] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.724065] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.737 [2024-04-24 20:10:29.724067] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.724070] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.737 [2024-04-24 20:10:29.724078] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.724081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.737 [2024-04-24 20:10:29.724084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.737 [2024-04-24 20:10:29.724089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.737 [2024-04-24 20:10:29.724101] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.737 [2024-04-24 20:10:29.724151] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.737 [2024-04-24 20:10:29.724156] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.724159] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724162] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.738 [2024-04-24 20:10:29.724169] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724175] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.738 [2024-04-24 20:10:29.724180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.738 [2024-04-24 20:10:29.724192] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.738 [2024-04-24 20:10:29.724240] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.724246] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.724248] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724251] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.738 [2024-04-24 20:10:29.724259] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724262] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724265] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.738 [2024-04-24 20:10:29.724271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.738 [2024-04-24 20:10:29.724282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.738 [2024-04-24 20:10:29.724337] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.724343] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.724345] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724348] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.738 [2024-04-24 20:10:29.724356] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.724362] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.738 [2024-04-24 20:10:29.724367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.738 [2024-04-24 20:10:29.724379] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.738 [2024-04-24 20:10:29.728424] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.728441] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.728444] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.728447] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.738 [2024-04-24 20:10:29.728458] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.728461] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.728464] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4d5300) 00:18:47.738 [2024-04-24 20:10:29.728470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.738 [2024-04-24 20:10:29.728488] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x51dde0, cid 3, qid 0 00:18:47.738 [2024-04-24 20:10:29.728538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.728543] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.728547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.728550] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x51dde0) on tqpair=0x4d5300 00:18:47.738 [2024-04-24 20:10:29.728555] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:18:47.738 00:18:47.738 20:10:29 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:47.738 [2024-04-24 20:10:29.759203] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
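(Editor's note, not part of the CI log: the step above launches build/bin/spdk_nvme_identify against the TCP target with the transport-ID string 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the DEBUG lines that follow trace the resulting fabrics connect and controller-initialization state machine. The sketch below shows roughly what such a host-side tool does with SPDK's public spdk/nvme.h API: parse that same transport-ID string, connect, and read the identify data. It is only an illustration under the assumption that the public API of this SPDK tree behaves as described; the program name is hypothetical, default controller options are used, and error handling is trimmed.)

/* identify_sketch.c - minimal, hypothetical host-side identify flow */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport-ID string passed to spdk_nvme_identify -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the connect/init sequence traced in the DEBUG lines below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s  Serial: %.20s  FW: %.8s\n",
	       cdata->mn, cdata->sn, cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

(End of editor's note; the captured log resumes below.)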
00:18:47.738 [2024-04-24 20:10:29.759248] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71675 ] 00:18:47.738 [2024-04-24 20:10:29.896992] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:18:47.738 [2024-04-24 20:10:29.897047] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:47.738 [2024-04-24 20:10:29.897052] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:47.738 [2024-04-24 20:10:29.897062] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:47.738 [2024-04-24 20:10:29.897074] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:18:47.738 [2024-04-24 20:10:29.897196] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:18:47.738 [2024-04-24 20:10:29.897233] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9a8300 0 00:18:47.738 [2024-04-24 20:10:29.909414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:47.738 [2024-04-24 20:10:29.909430] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:47.738 [2024-04-24 20:10:29.909434] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:47.738 [2024-04-24 20:10:29.909436] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:47.738 [2024-04-24 20:10:29.909473] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.909478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.909482] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.738 [2024-04-24 20:10:29.909493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:47.738 [2024-04-24 20:10:29.909514] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.738 [2024-04-24 20:10:29.917398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.917411] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.917414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.738 [2024-04-24 20:10:29.917426] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:47.738 [2024-04-24 20:10:29.917433] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:47.738 [2024-04-24 20:10:29.917437] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:47.738 [2024-04-24 20:10:29.917450] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917453] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917455] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.738 [2024-04-24 20:10:29.917462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.738 [2024-04-24 20:10:29.917478] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.738 [2024-04-24 20:10:29.917517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.917521] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.917524] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917526] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.738 [2024-04-24 20:10:29.917532] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:47.738 [2024-04-24 20:10:29.917536] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:47.738 [2024-04-24 20:10:29.917541] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917544] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917546] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.738 [2024-04-24 20:10:29.917567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.738 [2024-04-24 20:10:29.917579] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.738 [2024-04-24 20:10:29.917621] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.738 [2024-04-24 20:10:29.917631] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.738 [2024-04-24 20:10:29.917633] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917636] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.738 [2024-04-24 20:10:29.917640] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:47.738 [2024-04-24 20:10:29.917646] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:47.738 [2024-04-24 20:10:29.917664] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917667] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.738 [2024-04-24 20:10:29.917669] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.917674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.739 [2024-04-24 20:10:29.917700] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.917741] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.739 [2024-04-24 20:10:29.917746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.739 [2024-04-24 20:10:29.917749] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.917751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.739 [2024-04-24 20:10:29.917755] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:47.739 [2024-04-24 20:10:29.917762] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.917765] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.917768] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.917773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.739 [2024-04-24 20:10:29.917784] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.917832] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.739 [2024-04-24 20:10:29.917838] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.739 [2024-04-24 20:10:29.917840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.917843] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.739 [2024-04-24 20:10:29.917846] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:47.739 [2024-04-24 20:10:29.917850] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:47.739 [2024-04-24 20:10:29.917855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:47.739 [2024-04-24 20:10:29.917959] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:47.739 [2024-04-24 20:10:29.917969] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:47.739 [2024-04-24 20:10:29.917976] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.917979] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.917982] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.917987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.739 [2024-04-24 20:10:29.917999] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.918038] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.739 [2024-04-24 20:10:29.918043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.739 [2024-04-24 20:10:29.918045] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918048] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.739 [2024-04-24 20:10:29.918052] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:47.739 [2024-04-24 20:10:29.918058] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918061] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918064] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.739 [2024-04-24 20:10:29.918080] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.918123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.739 [2024-04-24 20:10:29.918128] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.739 [2024-04-24 20:10:29.918130] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918133] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.739 [2024-04-24 20:10:29.918136] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:47.739 [2024-04-24 20:10:29.918140] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:47.739 [2024-04-24 20:10:29.918145] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:47.739 [2024-04-24 20:10:29.918152] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:47.739 [2024-04-24 20:10:29.918160] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918162] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.739 [2024-04-24 20:10:29.918179] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.918285] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.739 [2024-04-24 20:10:29.918292] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.739 [2024-04-24 20:10:29.918295] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918298] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=4096, cccid=0 00:18:47.739 [2024-04-24 20:10:29.918302] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f09c0) on tqpair(0x9a8300): expected_datao=0, payload_size=4096 00:18:47.739 [2024-04-24 20:10:29.918306] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918313] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918316] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 
20:10:29.918323] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.739 [2024-04-24 20:10:29.918328] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.739 [2024-04-24 20:10:29.918331] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.739 [2024-04-24 20:10:29.918341] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:47.739 [2024-04-24 20:10:29.918345] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:47.739 [2024-04-24 20:10:29.918348] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:47.739 [2024-04-24 20:10:29.918355] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:47.739 [2024-04-24 20:10:29.918358] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:47.739 [2024-04-24 20:10:29.918362] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:47.739 [2024-04-24 20:10:29.918369] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:47.739 [2024-04-24 20:10:29.918374] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918378] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918380] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.739 [2024-04-24 20:10:29.918409] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.918465] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.739 [2024-04-24 20:10:29.918471] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.739 [2024-04-24 20:10:29.918473] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918476] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f09c0) on tqpair=0x9a8300 00:18:47.739 [2024-04-24 20:10:29.918483] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918486] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918488] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.739 [2024-04-24 20:10:29.918499] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918504] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9a8300) 
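(Editor's note, not part of the CI log: at this point the log shows the controller finishing identify and queueing its Asynchronous Event Requests, followed shortly by the keep-alive setup. The fragment below is a hedged sketch, assuming the same public spdk/nvme.h API, of how a host application typically registers an AER handler and polls the admin queue so those AER and keep-alive commands make progress; the helper names are hypothetical.)

#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	/* cdw0 of an AER completion encodes the async event type/info. */
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

static void
poll_admin(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	for (;;) {
		/* Also services the periodic keep-alive configured below. */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}

(End of editor's note; the captured log resumes below.)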
00:18:47.739 [2024-04-24 20:10:29.918509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.739 [2024-04-24 20:10:29.918514] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918517] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918519] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.739 [2024-04-24 20:10:29.918529] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918532] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.739 [2024-04-24 20:10:29.918543] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:47.739 [2024-04-24 20:10:29.918552] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:47.739 [2024-04-24 20:10:29.918558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.739 [2024-04-24 20:10:29.918561] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.739 [2024-04-24 20:10:29.918566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.739 [2024-04-24 20:10:29.918580] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f09c0, cid 0, qid 0 00:18:47.739 [2024-04-24 20:10:29.918585] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0b20, cid 1, qid 0 00:18:47.740 [2024-04-24 20:10:29.918589] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0c80, cid 2, qid 0 00:18:47.740 [2024-04-24 20:10:29.918593] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.740 [2024-04-24 20:10:29.918597] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.740 [2024-04-24 20:10:29.918712] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.740 [2024-04-24 20:10:29.918721] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.740 [2024-04-24 20:10:29.918724] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.918727] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.740 [2024-04-24 20:10:29.918732] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:47.740 [2024-04-24 20:10:29.918735] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:47.740 [2024-04-24 
20:10:29.918743] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.918748] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.918753] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.918756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.918758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.740 [2024-04-24 20:10:29.918764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:47.740 [2024-04-24 20:10:29.918776] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.740 [2024-04-24 20:10:29.918825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.740 [2024-04-24 20:10:29.918830] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.740 [2024-04-24 20:10:29.918833] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.918836] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.740 [2024-04-24 20:10:29.918883] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.918894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.918901] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.918904] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.740 [2024-04-24 20:10:29.918910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.740 [2024-04-24 20:10:29.918922] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.740 [2024-04-24 20:10:29.918979] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.740 [2024-04-24 20:10:29.918984] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.740 [2024-04-24 20:10:29.918987] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.918989] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=4096, cccid=4 00:18:47.740 [2024-04-24 20:10:29.918993] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f0f40) on tqpair(0x9a8300): expected_datao=0, payload_size=4096 00:18:47.740 [2024-04-24 20:10:29.918996] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919002] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919005] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919011] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.740 [2024-04-24 20:10:29.919016] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.740 [2024-04-24 20:10:29.919019] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919022] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.740 [2024-04-24 20:10:29.919031] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:47.740 [2024-04-24 20:10:29.919042] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919055] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919058] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.740 [2024-04-24 20:10:29.919064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.740 [2024-04-24 20:10:29.919076] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.740 [2024-04-24 20:10:29.919149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.740 [2024-04-24 20:10:29.919154] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.740 [2024-04-24 20:10:29.919157] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919160] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=4096, cccid=4 00:18:47.740 [2024-04-24 20:10:29.919163] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f0f40) on tqpair(0x9a8300): expected_datao=0, payload_size=4096 00:18:47.740 [2024-04-24 20:10:29.919166] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919172] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919175] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919181] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.740 [2024-04-24 20:10:29.919186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.740 [2024-04-24 20:10:29.919189] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.740 [2024-04-24 20:10:29.919203] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919210] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919216] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919219] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.740 [2024-04-24 20:10:29.919225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.740 [2024-04-24 20:10:29.919237] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.740 [2024-04-24 20:10:29.919295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.740 [2024-04-24 20:10:29.919300] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.740 [2024-04-24 20:10:29.919303] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919305] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=4096, cccid=4 00:18:47.740 [2024-04-24 20:10:29.919309] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f0f40) on tqpair(0x9a8300): expected_datao=0, payload_size=4096 00:18:47.740 [2024-04-24 20:10:29.919312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919318] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919320] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919327] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.740 [2024-04-24 20:10:29.919332] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.740 [2024-04-24 20:10:29.919334] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919337] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.740 [2024-04-24 20:10:29.919344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919350] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919357] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919362] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919366] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919370] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:47.740 [2024-04-24 20:10:29.919374] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:47.740 [2024-04-24 20:10:29.919387] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:47.740 [2024-04-24 20:10:29.919401] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.740 [2024-04-24 20:10:29.919404] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.740 [2024-04-24 20:10:29.919410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.740 [2024-04-24 20:10:29.919415] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919418] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:47.741 [2024-04-24 20:10:29.919444] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.741 [2024-04-24 20:10:29.919448] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f10a0, cid 5, qid 0 00:18:47.741 [2024-04-24 20:10:29.919518] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.741 [2024-04-24 20:10:29.919527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.741 [2024-04-24 20:10:29.919530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.741 [2024-04-24 20:10:29.919539] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.741 [2024-04-24 20:10:29.919544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.741 [2024-04-24 20:10:29.919547] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919550] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f10a0) on tqpair=0x9a8300 00:18:47.741 [2024-04-24 20:10:29.919557] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919560] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919578] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f10a0, cid 5, qid 0 00:18:47.741 [2024-04-24 20:10:29.919630] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.741 [2024-04-24 20:10:29.919635] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.741 [2024-04-24 20:10:29.919638] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919641] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f10a0) on tqpair=0x9a8300 00:18:47.741 [2024-04-24 20:10:29.919648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919668] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f10a0, cid 5, qid 0 00:18:47.741 [2024-04-24 20:10:29.919729] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.741 [2024-04-24 20:10:29.919734] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.741 [2024-04-24 20:10:29.919736] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919739] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f10a0) on tqpair=0x9a8300 00:18:47.741 [2024-04-24 20:10:29.919746] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919749] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919764] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f10a0, cid 5, qid 0 00:18:47.741 [2024-04-24 20:10:29.919808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.741 [2024-04-24 20:10:29.919813] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.741 [2024-04-24 20:10:29.919815] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919818] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f10a0) on tqpair=0x9a8300 00:18:47.741 [2024-04-24 20:10:29.919827] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919831] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919841] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919854] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919857] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919868] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.919871] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9a8300) 00:18:47.741 [2024-04-24 20:10:29.919876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.741 [2024-04-24 20:10:29.919888] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f10a0, cid 5, qid 0 00:18:47.741 [2024-04-24 20:10:29.919892] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0f40, cid 4, qid 0 00:18:47.741 [2024-04-24 20:10:29.919896] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f1200, cid 6, qid 0 00:18:47.741 [2024-04-24 20:10:29.919899] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f1360, cid 7, qid 0 00:18:47.741 [2024-04-24 20:10:29.920033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.741 [2024-04-24 20:10:29.920047] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.741 [2024-04-24 20:10:29.920050] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920053] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=8192, cccid=5 00:18:47.741 [2024-04-24 20:10:29.920056] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f10a0) on tqpair(0x9a8300): expected_datao=0, payload_size=8192 00:18:47.741 [2024-04-24 20:10:29.920059] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920071] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920075] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920079] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.741 [2024-04-24 20:10:29.920085] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.741 [2024-04-24 20:10:29.920087] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920090] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=512, cccid=4 00:18:47.741 [2024-04-24 20:10:29.920093] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f0f40) on tqpair(0x9a8300): expected_datao=0, payload_size=512 00:18:47.741 [2024-04-24 20:10:29.920096] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920101] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920103] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920108] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.741 [2024-04-24 20:10:29.920112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.741 [2024-04-24 20:10:29.920115] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920117] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=512, cccid=6 00:18:47.741 [2024-04-24 20:10:29.920120] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9f1200) on tqpair(0x9a8300): expected_datao=0, payload_size=512 00:18:47.741 [2024-04-24 20:10:29.920123] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920128] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920130] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920135] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:47.741 [2024-04-24 20:10:29.920139] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:47.741 [2024-04-24 20:10:29.920142] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920144] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a8300): datao=0, datal=4096, cccid=7 00:18:47.741 [2024-04-24 20:10:29.920147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x9f1360) on tqpair(0x9a8300): expected_datao=0, payload_size=4096 00:18:47.741 [2024-04-24 20:10:29.920150] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920155] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920158] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:47.741 [2024-04-24 20:10:29.920164] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.741 [2024-04-24 20:10:29.920169] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.741 [2024-04-24 20:10:29.920171] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.742 [2024-04-24 20:10:29.920174] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f10a0) on tqpair=0x9a8300 00:18:47.742 [2024-04-24 20:10:29.920185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.742 [2024-04-24 20:10:29.920190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.742 [2024-04-24 20:10:29.920193] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.742 [2024-04-24 20:10:29.920195] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0f40) on tqpair=0x9a8300 00:18:47.742 [2024-04-24 20:10:29.920203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.742 [2024-04-24 20:10:29.920208] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.742 [2024-04-24 20:10:29.920210] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.742 [2024-04-24 20:10:29.920213] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f1200) on tqpair=0x9a8300 00:18:47.742 [2024-04-24 20:10:29.920219] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.742 [2024-04-24 20:10:29.920223] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.742 [2024-04-24 20:10:29.920226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.742 [2024-04-24 20:10:29.920228] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f1360) on tqpair=0x9a8300 00:18:47.742 ===================================================== 00:18:47.742 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:47.742 ===================================================== 00:18:47.742 Controller Capabilities/Features 00:18:47.742 ================================ 00:18:47.742 Vendor ID: 8086 00:18:47.742 Subsystem Vendor ID: 8086 00:18:47.742 Serial Number: SPDK00000000000001 00:18:47.742 Model Number: SPDK bdev Controller 00:18:47.742 Firmware Version: 24.05 00:18:47.742 Recommended Arb Burst: 6 00:18:47.742 IEEE OUI Identifier: e4 d2 5c 00:18:47.742 Multi-path I/O 00:18:47.742 May have multiple subsystem ports: Yes 00:18:47.742 May have multiple controllers: Yes 00:18:47.742 Associated with SR-IOV VF: No 00:18:47.742 Max Data Transfer Size: 131072 00:18:47.742 Max Number of Namespaces: 32 00:18:47.742 Max Number of I/O Queues: 127 00:18:47.742 NVMe Specification Version (VS): 1.3 00:18:47.742 NVMe Specification Version (Identify): 1.3 00:18:47.742 Maximum Queue Entries: 128 00:18:47.742 Contiguous Queues Required: Yes 00:18:47.742 Arbitration Mechanisms Supported 00:18:47.742 Weighted Round Robin: Not Supported 00:18:47.742 Vendor Specific: Not Supported 00:18:47.742 Reset Timeout: 15000 ms 00:18:47.742 Doorbell Stride: 4 bytes 00:18:47.742 
NVM Subsystem Reset: Not Supported 00:18:47.742 Command Sets Supported 00:18:47.742 NVM Command Set: Supported 00:18:47.742 Boot Partition: Not Supported 00:18:47.742 Memory Page Size Minimum: 4096 bytes 00:18:47.742 Memory Page Size Maximum: 4096 bytes 00:18:47.742 Persistent Memory Region: Not Supported 00:18:47.742 Optional Asynchronous Events Supported 00:18:47.742 Namespace Attribute Notices: Supported 00:18:47.742 Firmware Activation Notices: Not Supported 00:18:47.742 ANA Change Notices: Not Supported 00:18:47.742 PLE Aggregate Log Change Notices: Not Supported 00:18:47.742 LBA Status Info Alert Notices: Not Supported 00:18:47.742 EGE Aggregate Log Change Notices: Not Supported 00:18:47.742 Normal NVM Subsystem Shutdown event: Not Supported 00:18:47.742 Zone Descriptor Change Notices: Not Supported 00:18:47.742 Discovery Log Change Notices: Not Supported 00:18:47.742 Controller Attributes 00:18:47.742 128-bit Host Identifier: Supported 00:18:47.742 Non-Operational Permissive Mode: Not Supported 00:18:47.742 NVM Sets: Not Supported 00:18:47.742 Read Recovery Levels: Not Supported 00:18:47.742 Endurance Groups: Not Supported 00:18:47.742 Predictable Latency Mode: Not Supported 00:18:47.742 Traffic Based Keep ALive: Not Supported 00:18:47.742 Namespace Granularity: Not Supported 00:18:47.742 SQ Associations: Not Supported 00:18:47.742 UUID List: Not Supported 00:18:47.742 Multi-Domain Subsystem: Not Supported 00:18:47.742 Fixed Capacity Management: Not Supported 00:18:47.742 Variable Capacity Management: Not Supported 00:18:47.742 Delete Endurance Group: Not Supported 00:18:47.742 Delete NVM Set: Not Supported 00:18:47.742 Extended LBA Formats Supported: Not Supported 00:18:47.742 Flexible Data Placement Supported: Not Supported 00:18:47.742 00:18:47.742 Controller Memory Buffer Support 00:18:47.742 ================================ 00:18:47.742 Supported: No 00:18:47.742 00:18:47.742 Persistent Memory Region Support 00:18:47.742 ================================ 00:18:47.742 Supported: No 00:18:47.742 00:18:47.742 Admin Command Set Attributes 00:18:47.742 ============================ 00:18:47.742 Security Send/Receive: Not Supported 00:18:47.742 Format NVM: Not Supported 00:18:47.742 Firmware Activate/Download: Not Supported 00:18:47.742 Namespace Management: Not Supported 00:18:47.742 Device Self-Test: Not Supported 00:18:47.742 Directives: Not Supported 00:18:47.742 NVMe-MI: Not Supported 00:18:47.742 Virtualization Management: Not Supported 00:18:47.742 Doorbell Buffer Config: Not Supported 00:18:47.742 Get LBA Status Capability: Not Supported 00:18:47.742 Command & Feature Lockdown Capability: Not Supported 00:18:47.742 Abort Command Limit: 4 00:18:47.742 Async Event Request Limit: 4 00:18:47.742 Number of Firmware Slots: N/A 00:18:47.742 Firmware Slot 1 Read-Only: N/A 00:18:47.742 Firmware Activation Without Reset: N/A 00:18:47.742 Multiple Update Detection Support: N/A 00:18:47.742 Firmware Update Granularity: No Information Provided 00:18:47.742 Per-Namespace SMART Log: No 00:18:47.742 Asymmetric Namespace Access Log Page: Not Supported 00:18:47.742 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:47.742 Command Effects Log Page: Supported 00:18:47.742 Get Log Page Extended Data: Supported 00:18:47.742 Telemetry Log Pages: Not Supported 00:18:47.742 Persistent Event Log Pages: Not Supported 00:18:47.742 Supported Log Pages Log Page: May Support 00:18:47.742 Commands Supported & Effects Log Page: Not Supported 00:18:47.742 Feature Identifiers & Effects Log Page:May Support 
00:18:47.742 NVMe-MI Commands & Effects Log Page: May Support 00:18:47.742 Data Area 4 for Telemetry Log: Not Supported 00:18:47.742 Error Log Page Entries Supported: 128 00:18:47.742 Keep Alive: Supported 00:18:47.742 Keep Alive Granularity: 10000 ms 00:18:47.742 00:18:47.742 NVM Command Set Attributes 00:18:47.742 ========================== 00:18:47.742 Submission Queue Entry Size 00:18:47.742 Max: 64 00:18:47.742 Min: 64 00:18:47.742 Completion Queue Entry Size 00:18:47.742 Max: 16 00:18:47.742 Min: 16 00:18:47.742 Number of Namespaces: 32 00:18:47.742 Compare Command: Supported 00:18:47.742 Write Uncorrectable Command: Not Supported 00:18:47.742 Dataset Management Command: Supported 00:18:47.742 Write Zeroes Command: Supported 00:18:47.742 Set Features Save Field: Not Supported 00:18:47.742 Reservations: Supported 00:18:47.742 Timestamp: Not Supported 00:18:47.742 Copy: Supported 00:18:47.742 Volatile Write Cache: Present 00:18:47.742 Atomic Write Unit (Normal): 1 00:18:47.742 Atomic Write Unit (PFail): 1 00:18:47.742 Atomic Compare & Write Unit: 1 00:18:47.742 Fused Compare & Write: Supported 00:18:47.742 Scatter-Gather List 00:18:47.742 SGL Command Set: Supported 00:18:47.742 SGL Keyed: Supported 00:18:47.742 SGL Bit Bucket Descriptor: Not Supported 00:18:47.742 SGL Metadata Pointer: Not Supported 00:18:47.742 Oversized SGL: Not Supported 00:18:47.742 SGL Metadata Address: Not Supported 00:18:47.742 SGL Offset: Supported 00:18:47.742 Transport SGL Data Block: Not Supported 00:18:47.742 Replay Protected Memory Block: Not Supported 00:18:47.742 00:18:47.743 Firmware Slot Information 00:18:47.743 ========================= 00:18:47.743 Active slot: 1 00:18:47.743 Slot 1 Firmware Revision: 24.05 00:18:47.743 00:18:47.743 00:18:47.743 Commands Supported and Effects 00:18:47.743 ============================== 00:18:47.743 Admin Commands 00:18:47.743 -------------- 00:18:47.743 Get Log Page (02h): Supported 00:18:47.743 Identify (06h): Supported 00:18:47.743 Abort (08h): Supported 00:18:47.743 Set Features (09h): Supported 00:18:47.743 Get Features (0Ah): Supported 00:18:47.743 Asynchronous Event Request (0Ch): Supported 00:18:47.743 Keep Alive (18h): Supported 00:18:47.743 I/O Commands 00:18:47.743 ------------ 00:18:47.743 Flush (00h): Supported LBA-Change 00:18:47.743 Write (01h): Supported LBA-Change 00:18:47.743 Read (02h): Supported 00:18:47.743 Compare (05h): Supported 00:18:47.743 Write Zeroes (08h): Supported LBA-Change 00:18:47.743 Dataset Management (09h): Supported LBA-Change 00:18:47.743 Copy (19h): Supported LBA-Change 00:18:47.743 Unknown (79h): Supported LBA-Change 00:18:47.743 Unknown (7Ah): Supported 00:18:47.743 00:18:47.743 Error Log 00:18:47.743 ========= 00:18:47.743 00:18:47.743 Arbitration 00:18:47.743 =========== 00:18:47.743 Arbitration Burst: 1 00:18:47.743 00:18:47.743 Power Management 00:18:47.743 ================ 00:18:47.743 Number of Power States: 1 00:18:47.743 Current Power State: Power State #0 00:18:47.743 Power State #0: 00:18:47.743 Max Power: 0.00 W 00:18:47.743 Non-Operational State: Operational 00:18:47.743 Entry Latency: Not Reported 00:18:47.743 Exit Latency: Not Reported 00:18:47.743 Relative Read Throughput: 0 00:18:47.743 Relative Read Latency: 0 00:18:47.743 Relative Write Throughput: 0 00:18:47.743 Relative Write Latency: 0 00:18:47.743 Idle Power: Not Reported 00:18:47.743 Active Power: Not Reported 00:18:47.743 Non-Operational Permissive Mode: Not Supported 00:18:47.743 00:18:47.743 Health Information 00:18:47.743 ================== 
00:18:47.743 Critical Warnings: 00:18:47.743 Available Spare Space: OK 00:18:47.743 Temperature: OK 00:18:47.743 Device Reliability: OK 00:18:47.743 Read Only: No 00:18:47.743 Volatile Memory Backup: OK 00:18:47.743 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:47.743 Temperature Threshold: [2024-04-24 20:10:29.920323] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9a8300) 00:18:47.743 [2024-04-24 20:10:29.920333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.743 [2024-04-24 20:10:29.920347] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f1360, cid 7, qid 0 00:18:47.743 [2024-04-24 20:10:29.920399] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.743 [2024-04-24 20:10:29.920405] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.743 [2024-04-24 20:10:29.920407] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920410] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f1360) on tqpair=0x9a8300 00:18:47.743 [2024-04-24 20:10:29.920435] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:18:47.743 [2024-04-24 20:10:29.920445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.743 [2024-04-24 20:10:29.920450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.743 [2024-04-24 20:10:29.920454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.743 [2024-04-24 20:10:29.920459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:47.743 [2024-04-24 20:10:29.920465] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920468] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920470] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.743 [2024-04-24 20:10:29.920476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.743 [2024-04-24 20:10:29.920490] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.743 [2024-04-24 20:10:29.920550] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.743 [2024-04-24 20:10:29.920555] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.743 [2024-04-24 20:10:29.920558] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920560] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.743 [2024-04-24 20:10:29.920566] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920569] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920572] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.743 [2024-04-24 20:10:29.920577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.743 [2024-04-24 20:10:29.920590] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.743 [2024-04-24 20:10:29.920652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.743 [2024-04-24 20:10:29.920657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.743 [2024-04-24 20:10:29.920659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920662] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.743 [2024-04-24 20:10:29.920665] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:47.743 [2024-04-24 20:10:29.920669] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:47.743 [2024-04-24 20:10:29.920677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920680] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.743 [2024-04-24 20:10:29.920688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.743 [2024-04-24 20:10:29.920698] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.743 [2024-04-24 20:10:29.920747] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.743 [2024-04-24 20:10:29.920752] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.743 [2024-04-24 20:10:29.920755] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920758] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.743 [2024-04-24 20:10:29.920766] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920769] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920771] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.743 [2024-04-24 20:10:29.920776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.743 [2024-04-24 20:10:29.920787] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.743 [2024-04-24 20:10:29.920842] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.743 [2024-04-24 20:10:29.920846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.743 [2024-04-24 20:10:29.920848] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920851] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.743 [2024-04-24 20:10:29.920857] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920860] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920862] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.743 [2024-04-24 20:10:29.920867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.743 [2024-04-24 20:10:29.920876] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.743 [2024-04-24 20:10:29.920913] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.743 [2024-04-24 20:10:29.920918] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.743 [2024-04-24 20:10:29.920920] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.743 [2024-04-24 20:10:29.920923] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.743 [2024-04-24 20:10:29.920929] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.920932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.920934] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 [2024-04-24 20:10:29.920939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.920948] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.920987] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.920992] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.920994] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.920996] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.921002] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921005] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921007] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 [2024-04-24 20:10:29.921012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.921022] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.921063] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.921068] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.921070] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921073] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.921079] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921082] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 
[2024-04-24 20:10:29.921089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.921098] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.921148] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.921153] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.921155] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921158] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.921164] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921167] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921169] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 [2024-04-24 20:10:29.921174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.921183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.921228] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.921232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.921235] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921237] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.921243] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921246] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921249] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 [2024-04-24 20:10:29.921254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.921264] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.921300] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.921305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.921307] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.921316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921319] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 [2024-04-24 20:10:29.921326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.921335] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.921376] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.921380] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.921383] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.921385] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.925392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.925397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.925399] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a8300) 00:18:47.744 [2024-04-24 20:10:29.925405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.744 [2024-04-24 20:10:29.925419] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9f0de0, cid 3, qid 0 00:18:47.744 [2024-04-24 20:10:29.925467] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:47.744 [2024-04-24 20:10:29.925472] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:47.744 [2024-04-24 20:10:29.925474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:47.744 [2024-04-24 20:10:29.925477] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9f0de0) on tqpair=0x9a8300 00:18:47.744 [2024-04-24 20:10:29.925482] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:18:47.744 0 Kelvin (-273 Celsius) 00:18:47.744 Available Spare: 0% 00:18:47.744 Available Spare Threshold: 0% 00:18:47.744 Life Percentage Used: 0% 00:18:47.744 Data Units Read: 0 00:18:47.744 Data Units Written: 0 00:18:47.744 Host Read Commands: 0 00:18:47.744 Host Write Commands: 0 00:18:47.744 Controller Busy Time: 0 minutes 00:18:47.744 Power Cycles: 0 00:18:47.744 Power On Hours: 0 hours 00:18:47.744 Unsafe Shutdowns: 0 00:18:47.744 Unrecoverable Media Errors: 0 00:18:47.744 Lifetime Error Log Entries: 0 00:18:47.744 Warning Temperature Time: 0 minutes 00:18:47.744 Critical Temperature Time: 0 minutes 00:18:47.744 00:18:47.744 Number of Queues 00:18:47.744 ================ 00:18:47.744 Number of I/O Submission Queues: 127 00:18:47.744 Number of I/O Completion Queues: 127 00:18:47.744 00:18:47.744 Active Namespaces 00:18:47.744 ================= 00:18:47.744 Namespace ID:1 00:18:47.744 Error Recovery Timeout: Unlimited 00:18:47.744 Command Set Identifier: NVM (00h) 00:18:47.744 Deallocate: Supported 00:18:47.744 Deallocated/Unwritten Error: Not Supported 00:18:47.744 Deallocated Read Value: Unknown 00:18:47.744 Deallocate in Write Zeroes: Not Supported 00:18:47.744 Deallocated Guard Field: 0xFFFF 00:18:47.744 Flush: Supported 00:18:47.744 Reservation: Supported 00:18:47.744 Namespace Sharing Capabilities: Multiple Controllers 00:18:47.744 Size (in LBAs): 131072 (0GiB) 00:18:47.744 Capacity (in LBAs): 131072 (0GiB) 00:18:47.744 Utilization (in LBAs): 131072 (0GiB) 00:18:47.744 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:47.744 EUI64: ABCDEF0123456789 00:18:47.744 UUID: e1a70880-be53-4738-a7ed-08443a8d4258 00:18:47.744 Thin Provisioning: Not Supported 00:18:47.744 Per-NS Atomic 
Units: Yes 00:18:47.744 Atomic Boundary Size (Normal): 0 00:18:47.744 Atomic Boundary Size (PFail): 0 00:18:47.744 Atomic Boundary Offset: 0 00:18:47.744 Maximum Single Source Range Length: 65535 00:18:47.744 Maximum Copy Length: 65535 00:18:47.744 Maximum Source Range Count: 1 00:18:47.744 NGUID/EUI64 Never Reused: No 00:18:47.744 Namespace Write Protected: No 00:18:47.744 Number of LBA Formats: 1 00:18:47.744 Current LBA Format: LBA Format #00 00:18:47.744 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:47.744 00:18:47.744 20:10:29 -- host/identify.sh@51 -- # sync 00:18:48.004 20:10:29 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.004 20:10:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.004 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:48.004 20:10:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.004 20:10:29 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:48.004 20:10:29 -- host/identify.sh@56 -- # nvmftestfini 00:18:48.004 20:10:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:48.004 20:10:29 -- nvmf/common.sh@117 -- # sync 00:18:48.004 20:10:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.004 20:10:30 -- nvmf/common.sh@120 -- # set +e 00:18:48.004 20:10:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.004 20:10:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.004 rmmod nvme_tcp 00:18:48.004 rmmod nvme_fabrics 00:18:48.004 rmmod nvme_keyring 00:18:48.004 20:10:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.004 20:10:30 -- nvmf/common.sh@124 -- # set -e 00:18:48.004 20:10:30 -- nvmf/common.sh@125 -- # return 0 00:18:48.004 20:10:30 -- nvmf/common.sh@478 -- # '[' -n 71631 ']' 00:18:48.004 20:10:30 -- nvmf/common.sh@479 -- # killprocess 71631 00:18:48.004 20:10:30 -- common/autotest_common.sh@936 -- # '[' -z 71631 ']' 00:18:48.004 20:10:30 -- common/autotest_common.sh@940 -- # kill -0 71631 00:18:48.004 20:10:30 -- common/autotest_common.sh@941 -- # uname 00:18:48.004 20:10:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:48.004 20:10:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71631 00:18:48.004 20:10:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:48.004 20:10:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:48.004 killing process with pid 71631 00:18:48.004 20:10:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71631' 00:18:48.004 20:10:30 -- common/autotest_common.sh@955 -- # kill 71631 00:18:48.004 [2024-04-24 20:10:30.069613] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:48.004 [2024-04-24 20:10:30.069645] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:48.004 20:10:30 -- common/autotest_common.sh@960 -- # wait 71631 00:18:48.264 20:10:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:48.264 20:10:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:48.264 20:10:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:48.264 20:10:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.264 20:10:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.264 20:10:30 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.264 20:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.264 20:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.264 20:10:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:48.264 ************************************ 00:18:48.264 END TEST nvmf_identify 00:18:48.264 ************************************ 00:18:48.264 00:18:48.264 real 0m2.434s 00:18:48.264 user 0m6.322s 00:18:48.264 sys 0m0.649s 00:18:48.264 20:10:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.264 20:10:30 -- common/autotest_common.sh@10 -- # set +x 00:18:48.264 20:10:30 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:48.264 20:10:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:48.264 20:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.264 20:10:30 -- common/autotest_common.sh@10 -- # set +x 00:18:48.524 ************************************ 00:18:48.524 START TEST nvmf_perf 00:18:48.524 ************************************ 00:18:48.524 20:10:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:48.524 * Looking for test storage... 00:18:48.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:48.524 20:10:30 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.524 20:10:30 -- nvmf/common.sh@7 -- # uname -s 00:18:48.524 20:10:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.524 20:10:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.524 20:10:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.524 20:10:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.524 20:10:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.524 20:10:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.524 20:10:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.524 20:10:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.524 20:10:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.524 20:10:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.524 20:10:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:18:48.524 20:10:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:18:48.524 20:10:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.524 20:10:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.524 20:10:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.524 20:10:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.524 20:10:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.524 20:10:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.524 20:10:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.524 20:10:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.524 20:10:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.524 20:10:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.524 20:10:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.524 20:10:30 -- paths/export.sh@5 -- # export PATH 00:18:48.524 20:10:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.524 20:10:30 -- nvmf/common.sh@47 -- # : 0 00:18:48.524 20:10:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.524 20:10:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.524 20:10:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.524 20:10:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.524 20:10:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.524 20:10:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.524 20:10:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.524 20:10:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.524 20:10:30 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:48.524 20:10:30 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:48.524 20:10:30 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.524 20:10:30 -- host/perf.sh@17 -- # nvmftestinit 00:18:48.524 20:10:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:48.524 20:10:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.524 20:10:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:48.524 20:10:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:48.524 20:10:30 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:18:48.524 20:10:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.524 20:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.524 20:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.524 20:10:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:48.524 20:10:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:48.524 20:10:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:48.524 20:10:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:48.524 20:10:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:48.524 20:10:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:48.524 20:10:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.524 20:10:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.525 20:10:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:48.525 20:10:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:48.525 20:10:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.525 20:10:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.525 20:10:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.525 20:10:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.525 20:10:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.525 20:10:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.525 20:10:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.525 20:10:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.525 20:10:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:48.525 20:10:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:48.525 Cannot find device "nvmf_tgt_br" 00:18:48.525 20:10:30 -- nvmf/common.sh@155 -- # true 00:18:48.525 20:10:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.525 Cannot find device "nvmf_tgt_br2" 00:18:48.525 20:10:30 -- nvmf/common.sh@156 -- # true 00:18:48.525 20:10:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:48.525 20:10:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:48.785 Cannot find device "nvmf_tgt_br" 00:18:48.785 20:10:30 -- nvmf/common.sh@158 -- # true 00:18:48.785 20:10:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:48.785 Cannot find device "nvmf_tgt_br2" 00:18:48.785 20:10:30 -- nvmf/common.sh@159 -- # true 00:18:48.785 20:10:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:48.785 20:10:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:48.785 20:10:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.785 20:10:30 -- nvmf/common.sh@162 -- # true 00:18:48.785 20:10:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.785 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.785 20:10:30 -- nvmf/common.sh@163 -- # true 00:18:48.785 20:10:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.785 20:10:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.785 20:10:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.785 20:10:30 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.785 20:10:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.785 20:10:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.785 20:10:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:48.785 20:10:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:48.785 20:10:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:48.785 20:10:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:48.785 20:10:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:48.785 20:10:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:48.785 20:10:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:48.785 20:10:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.785 20:10:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.785 20:10:31 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.785 20:10:31 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:48.785 20:10:31 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:48.785 20:10:31 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.785 20:10:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.045 20:10:31 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.045 20:10:31 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.045 20:10:31 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.045 20:10:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:49.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:18:49.045 00:18:49.045 --- 10.0.0.2 ping statistics --- 00:18:49.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.045 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:18:49.045 20:10:31 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:49.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:18:49.045 00:18:49.045 --- 10.0.0.3 ping statistics --- 00:18:49.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.045 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:18:49.045 20:10:31 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:18:49.045 00:18:49.045 --- 10.0.0.1 ping statistics --- 00:18:49.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.045 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:49.045 20:10:31 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.045 20:10:31 -- nvmf/common.sh@422 -- # return 0 00:18:49.045 20:10:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:49.045 20:10:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.045 20:10:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:49.045 20:10:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:49.045 20:10:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.045 20:10:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:49.045 20:10:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:49.045 20:10:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:49.045 20:10:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:49.045 20:10:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:49.045 20:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:49.045 20:10:31 -- nvmf/common.sh@470 -- # nvmfpid=71849 00:18:49.045 20:10:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.045 20:10:31 -- nvmf/common.sh@471 -- # waitforlisten 71849 00:18:49.045 20:10:31 -- common/autotest_common.sh@817 -- # '[' -z 71849 ']' 00:18:49.045 20:10:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.045 20:10:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:49.045 20:10:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.045 20:10:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:49.045 20:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:49.045 [2024-04-24 20:10:31.189607] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:18:49.045 [2024-04-24 20:10:31.189667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.305 [2024-04-24 20:10:31.312452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.305 [2024-04-24 20:10:31.403398] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.305 [2024-04-24 20:10:31.403616] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.305 [2024-04-24 20:10:31.403662] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.305 [2024-04-24 20:10:31.403704] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.305 [2024-04-24 20:10:31.403739] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.305 [2024-04-24 20:10:31.404211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.305 [2024-04-24 20:10:31.403983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.305 [2024-04-24 20:10:31.404213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.305 [2024-04-24 20:10:31.404090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.874 20:10:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:49.874 20:10:32 -- common/autotest_common.sh@850 -- # return 0 00:18:49.874 20:10:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:49.874 20:10:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:49.874 20:10:32 -- common/autotest_common.sh@10 -- # set +x 00:18:49.874 20:10:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.133 20:10:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:50.133 20:10:32 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:50.391 20:10:32 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:50.391 20:10:32 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:50.649 20:10:32 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:50.649 20:10:32 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.907 20:10:32 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:50.907 20:10:32 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:18:50.907 20:10:32 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:50.907 20:10:32 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:50.907 20:10:32 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.907 [2024-04-24 20:10:33.123273] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.907 20:10:33 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.165 20:10:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:51.165 20:10:33 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.424 20:10:33 -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:51.424 20:10:33 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:51.683 20:10:33 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.941 [2024-04-24 20:10:33.974465] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:51.941 [2024-04-24 20:10:33.974908] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.941 20:10:33 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:51.941 20:10:34 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:51.941 20:10:34 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:51.941 20:10:34 -- host/perf.sh@21 -- # '[' 0 
-eq 1 ']' 00:18:51.941 20:10:34 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:53.334 Initializing NVMe Controllers 00:18:53.334 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:53.334 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:53.334 Initialization complete. Launching workers. 00:18:53.334 ======================================================== 00:18:53.334 Latency(us) 00:18:53.334 Device Information : IOPS MiB/s Average min max 00:18:53.334 PCIE (0000:00:10.0) NSID 1 from core 0: 19907.00 77.76 1613.96 268.92 7266.67 00:18:53.334 ======================================================== 00:18:53.334 Total : 19907.00 77.76 1613.96 268.92 7266.67 00:18:53.334 00:18:53.334 20:10:35 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:54.708 Initializing NVMe Controllers 00:18:54.708 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:54.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:54.708 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:54.708 Initialization complete. Launching workers. 00:18:54.708 ======================================================== 00:18:54.708 Latency(us) 00:18:54.708 Device Information : IOPS MiB/s Average min max 00:18:54.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4248.92 16.60 235.11 77.25 4263.13 00:18:54.708 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8103.47 6061.72 12057.58 00:18:54.708 ======================================================== 00:18:54.708 Total : 4372.91 17.08 458.22 77.25 12057.58 00:18:54.708 00:18:54.708 20:10:36 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:56.084 Initializing NVMe Controllers 00:18:56.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:56.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:56.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:56.084 Initialization complete. Launching workers. 
00:18:56.084 ======================================================== 00:18:56.084 Latency(us) 00:18:56.084 Device Information : IOPS MiB/s Average min max 00:18:56.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9191.63 35.90 3481.38 426.69 13862.26 00:18:56.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3865.85 15.10 8324.37 6588.72 24052.88 00:18:56.084 ======================================================== 00:18:56.084 Total : 13057.48 51.01 4915.21 426.69 24052.88 00:18:56.084 00:18:56.084 20:10:37 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:56.084 20:10:37 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:18:58.657 Initializing NVMe Controllers 00:18:58.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:58.657 Controller IO queue size 128, less than required. 00:18:58.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:58.657 Controller IO queue size 128, less than required. 00:18:58.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:58.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:58.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:58.657 Initialization complete. Launching workers. 00:18:58.657 ======================================================== 00:18:58.657 Latency(us) 00:18:58.657 Device Information : IOPS MiB/s Average min max 00:18:58.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2092.00 523.00 62169.16 39208.69 105689.47 00:18:58.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 665.50 166.37 200936.63 92992.02 318933.73 00:18:58.657 ======================================================== 00:18:58.657 Total : 2757.49 689.37 95659.55 39208.69 318933.73 00:18:58.657 00:18:58.657 20:10:40 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:18:58.657 No valid NVMe controllers or AIO or URING devices found 00:18:58.657 Initializing NVMe Controllers 00:18:58.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:58.657 Controller IO queue size 128, less than required. 00:18:58.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:58.657 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:58.657 Controller IO queue size 128, less than required. 00:18:58.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:58.657 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:18:58.657 WARNING: Some requested NVMe devices were skipped 00:18:58.657 20:10:40 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:19:01.193 Initializing NVMe Controllers 00:19:01.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:01.193 Controller IO queue size 128, less than required. 00:19:01.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:01.193 Controller IO queue size 128, less than required. 00:19:01.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:01.194 Initialization complete. Launching workers. 00:19:01.194 00:19:01.194 ==================== 00:19:01.194 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:19:01.194 TCP transport: 00:19:01.194 polls: 12783 00:19:01.194 idle_polls: 0 00:19:01.194 sock_completions: 12783 00:19:01.194 nvme_completions: 7891 00:19:01.194 submitted_requests: 11840 00:19:01.194 queued_requests: 1 00:19:01.194 00:19:01.194 ==================== 00:19:01.194 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:19:01.194 TCP transport: 00:19:01.194 polls: 12900 00:19:01.194 idle_polls: 0 00:19:01.194 sock_completions: 12900 00:19:01.194 nvme_completions: 7097 00:19:01.194 submitted_requests: 10516 00:19:01.194 queued_requests: 1 00:19:01.194 ======================================================== 00:19:01.194 Latency(us) 00:19:01.194 Device Information : IOPS MiB/s Average min max 00:19:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1972.22 493.05 66012.46 32362.49 116856.87 00:19:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1773.75 443.44 73000.04 33738.47 138462.48 00:19:01.194 ======================================================== 00:19:01.194 Total : 3745.96 936.49 69321.14 32362.49 138462.48 00:19:01.194 00:19:01.194 20:10:43 -- host/perf.sh@66 -- # sync 00:19:01.194 20:10:43 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.194 20:10:43 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:19:01.194 20:10:43 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:01.194 20:10:43 -- host/perf.sh@114 -- # nvmftestfini 00:19:01.194 20:10:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:01.194 20:10:43 -- nvmf/common.sh@117 -- # sync 00:19:01.194 20:10:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.194 20:10:43 -- nvmf/common.sh@120 -- # set +e 00:19:01.194 20:10:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.194 20:10:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.194 rmmod nvme_tcp 00:19:01.194 rmmod nvme_fabrics 00:19:01.194 rmmod nvme_keyring 00:19:01.194 20:10:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.453 20:10:43 -- nvmf/common.sh@124 -- # set -e 00:19:01.453 20:10:43 -- nvmf/common.sh@125 -- # return 0 00:19:01.453 20:10:43 -- nvmf/common.sh@478 -- # '[' -n 71849 ']' 00:19:01.453 20:10:43 -- nvmf/common.sh@479 -- # killprocess 71849 
00:19:01.453 20:10:43 -- common/autotest_common.sh@936 -- # '[' -z 71849 ']' 00:19:01.453 20:10:43 -- common/autotest_common.sh@940 -- # kill -0 71849 00:19:01.453 20:10:43 -- common/autotest_common.sh@941 -- # uname 00:19:01.453 20:10:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:01.453 20:10:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71849 00:19:01.453 killing process with pid 71849 00:19:01.453 20:10:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:01.453 20:10:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:01.453 20:10:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71849' 00:19:01.453 20:10:43 -- common/autotest_common.sh@955 -- # kill 71849 00:19:01.453 [2024-04-24 20:10:43.478652] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:01.453 20:10:43 -- common/autotest_common.sh@960 -- # wait 71849 00:19:02.832 20:10:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:02.832 20:10:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:02.832 20:10:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:02.832 20:10:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:02.832 20:10:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:02.832 20:10:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.832 20:10:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.832 20:10:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.832 20:10:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:02.832 00:19:02.832 real 0m14.447s 00:19:02.832 user 0m52.887s 00:19:02.832 sys 0m3.734s 00:19:02.832 20:10:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:02.832 20:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:02.832 ************************************ 00:19:02.832 END TEST nvmf_perf 00:19:02.832 ************************************ 00:19:02.832 20:10:45 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:02.832 20:10:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:02.832 20:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:02.832 20:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:03.092 ************************************ 00:19:03.093 START TEST nvmf_fio_host 00:19:03.093 ************************************ 00:19:03.093 20:10:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:03.093 * Looking for test storage... 
00:19:03.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:03.093 20:10:45 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.093 20:10:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.093 20:10:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.093 20:10:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.093 20:10:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- paths/export.sh@5 -- # export PATH 00:19:03.093 20:10:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:03.093 20:10:45 -- nvmf/common.sh@7 -- # uname -s 00:19:03.093 20:10:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.093 20:10:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.093 20:10:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.093 20:10:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.093 20:10:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.093 20:10:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.093 20:10:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.093 20:10:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.093 20:10:45 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.093 20:10:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.093 20:10:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:03.093 20:10:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:03.093 20:10:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.093 20:10:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.093 20:10:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:03.093 20:10:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.093 20:10:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.093 20:10:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.093 20:10:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.093 20:10:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.093 20:10:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- paths/export.sh@5 -- # export PATH 00:19:03.093 20:10:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.093 20:10:45 -- nvmf/common.sh@47 -- # : 0 00:19:03.093 20:10:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:03.093 20:10:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:03.093 20:10:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.093 20:10:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.093 20:10:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.093 20:10:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:03.093 20:10:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:03.093 20:10:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:03.093 20:10:45 -- host/fio.sh@12 -- # nvmftestinit 00:19:03.093 20:10:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:03.093 20:10:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.093 20:10:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:03.093 20:10:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:03.093 20:10:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:03.093 20:10:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.093 20:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.093 20:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.093 20:10:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:03.093 20:10:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:03.093 20:10:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:03.093 20:10:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:03.093 20:10:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:03.093 20:10:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:03.093 20:10:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.093 20:10:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.093 20:10:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:03.093 20:10:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:03.093 20:10:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:03.093 20:10:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:03.093 20:10:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:03.093 20:10:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.093 20:10:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:03.093 20:10:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:03.093 20:10:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:03.093 20:10:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:03.093 20:10:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:03.352 20:10:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:03.352 Cannot find device "nvmf_tgt_br" 00:19:03.352 20:10:45 -- nvmf/common.sh@155 -- # true 
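Note on the test network: nvmf_veth_init first tears down anything left over from the previous test (the "Cannot find device" and "Cannot open network namespace" messages around this point are expected) and then rebuilds an isolated topology for the NVMe/TCP target. Condensed from the trace that follows (link bring-up and error handling omitted), the layout is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The helper then brings every link up and verifies connectivity with the single-packet pings to 10.0.0.2, 10.0.0.3 and 10.0.0.1 seen further down.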
00:19:03.352 20:10:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.352 Cannot find device "nvmf_tgt_br2" 00:19:03.352 20:10:45 -- nvmf/common.sh@156 -- # true 00:19:03.352 20:10:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:03.352 20:10:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:03.352 Cannot find device "nvmf_tgt_br" 00:19:03.352 20:10:45 -- nvmf/common.sh@158 -- # true 00:19:03.352 20:10:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:03.352 Cannot find device "nvmf_tgt_br2" 00:19:03.352 20:10:45 -- nvmf/common.sh@159 -- # true 00:19:03.352 20:10:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:03.352 20:10:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:03.352 20:10:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.352 20:10:45 -- nvmf/common.sh@162 -- # true 00:19:03.352 20:10:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.352 20:10:45 -- nvmf/common.sh@163 -- # true 00:19:03.352 20:10:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.352 20:10:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.352 20:10:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:03.352 20:10:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.352 20:10:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.352 20:10:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.352 20:10:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.352 20:10:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:03.352 20:10:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:03.352 20:10:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:03.352 20:10:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:03.352 20:10:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:03.352 20:10:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:03.352 20:10:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.352 20:10:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.352 20:10:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.352 20:10:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:03.352 20:10:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:03.352 20:10:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:03.352 20:10:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:03.610 20:10:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:03.610 20:10:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:03.610 20:10:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:03.610 20:10:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:03.610 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:19:03.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:19:03.610 00:19:03.610 --- 10.0.0.2 ping statistics --- 00:19:03.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.610 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:19:03.610 20:10:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:03.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:03.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:19:03.610 00:19:03.610 --- 10.0.0.3 ping statistics --- 00:19:03.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.610 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:19:03.610 20:10:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:03.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:03.611 00:19:03.611 --- 10.0.0.1 ping statistics --- 00:19:03.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.611 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:03.611 20:10:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.611 20:10:45 -- nvmf/common.sh@422 -- # return 0 00:19:03.611 20:10:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:03.611 20:10:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.611 20:10:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:03.611 20:10:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:03.611 20:10:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.611 20:10:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:03.611 20:10:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:03.611 20:10:45 -- host/fio.sh@14 -- # [[ y != y ]] 00:19:03.611 20:10:45 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:19:03.611 20:10:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:03.611 20:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:03.611 20:10:45 -- host/fio.sh@22 -- # nvmfpid=72258 00:19:03.611 20:10:45 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:03.611 20:10:45 -- host/fio.sh@26 -- # waitforlisten 72258 00:19:03.611 20:10:45 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:03.611 20:10:45 -- common/autotest_common.sh@817 -- # '[' -z 72258 ']' 00:19:03.611 20:10:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.611 20:10:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.611 20:10:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.611 20:10:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.611 20:10:45 -- common/autotest_common.sh@10 -- # set +x 00:19:03.611 [2024-04-24 20:10:45.706472] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
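Note on target start-up: host/fio.sh runs the stock nvmf_tgt application inside the namespace built above, waits for its RPC socket, and only then configures the subsystem the fio jobs will connect to. A condensed reading of the trace (not the literal script; the real code backgrounds the target through its trap/waitforlisten helpers):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"     # returns once /var/tmp/spdk.sock accepts RPCs
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc_cmd is the autotest wrapper around scripts/rpc.py; direct invocations of rpc.py would take the same arguments.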
00:19:03.611 [2024-04-24 20:10:45.706539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.611 [2024-04-24 20:10:45.848855] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:03.871 [2024-04-24 20:10:45.948681] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.871 [2024-04-24 20:10:45.948728] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.871 [2024-04-24 20:10:45.948751] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.871 [2024-04-24 20:10:45.948755] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.871 [2024-04-24 20:10:45.948760] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.871 [2024-04-24 20:10:45.948968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.871 [2024-04-24 20:10:45.949530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.871 [2024-04-24 20:10:45.949717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.871 [2024-04-24 20:10:45.949721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.440 20:10:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.440 20:10:46 -- common/autotest_common.sh@850 -- # return 0 00:19:04.440 20:10:46 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:04.440 20:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.440 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.440 [2024-04-24 20:10:46.619337] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.440 20:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.440 20:10:46 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:19:04.440 20:10:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.440 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.440 20:10:46 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:04.440 20:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.440 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.701 Malloc1 00:19:04.701 20:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.701 20:10:46 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:04.701 20:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.701 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.701 20:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.701 20:10:46 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:04.701 20:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.701 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.701 20:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.701 20:10:46 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.701 20:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.701 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.701 [2024-04-24 20:10:46.729347] 
nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:04.701 [2024-04-24 20:10:46.729583] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.701 20:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.701 20:10:46 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:04.701 20:10:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.701 20:10:46 -- common/autotest_common.sh@10 -- # set +x 00:19:04.701 20:10:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.701 20:10:46 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:04.701 20:10:46 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:04.701 20:10:46 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:04.701 20:10:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:04.701 20:10:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:04.701 20:10:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:04.701 20:10:46 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:04.701 20:10:46 -- common/autotest_common.sh@1327 -- # shift 00:19:04.701 20:10:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:04.701 20:10:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:04.701 20:10:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:04.701 20:10:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:04.701 20:10:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:04.701 20:10:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:04.701 20:10:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:04.701 20:10:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:19:04.701 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:04.701 fio-3.35 00:19:04.701 Starting 1 thread 00:19:07.234 00:19:07.234 test: (groupid=0, jobs=1): err= 0: pid=72322: Wed Apr 24 20:10:49 2024 00:19:07.234 read: IOPS=9470, BW=37.0MiB/s (38.8MB/s)(74.2MiB/2007msec) 00:19:07.234 slat (nsec): 
min=1598, max=424420, avg=2113.61, stdev=4096.14 00:19:07.234 clat (usec): min=3645, max=12609, avg=7051.05, stdev=546.23 00:19:07.234 lat (usec): min=3690, max=12611, avg=7053.17, stdev=546.34 00:19:07.234 clat percentiles (usec): 00:19:07.234 | 1.00th=[ 5735], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6652], 00:19:07.234 | 30.00th=[ 6783], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:19:07.234 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7832], 00:19:07.234 | 99.00th=[ 8356], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[11469], 00:19:07.234 | 99.99th=[12649] 00:19:07.234 bw ( KiB/s): min=36432, max=38600, per=100.00%, avg=37896.00, stdev=993.74, samples=4 00:19:07.234 iops : min= 9108, max= 9650, avg=9474.00, stdev=248.44, samples=4 00:19:07.234 write: IOPS=9477, BW=37.0MiB/s (38.8MB/s)(74.3MiB/2007msec); 0 zone resets 00:19:07.234 slat (nsec): min=1652, max=323036, avg=2189.09, stdev=2757.09 00:19:07.234 clat (usec): min=3455, max=12160, avg=6401.30, stdev=504.58 00:19:07.234 lat (usec): min=3475, max=12162, avg=6403.49, stdev=504.83 00:19:07.234 clat percentiles (usec): 00:19:07.234 | 1.00th=[ 5145], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:19:07.234 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6521], 00:19:07.234 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:19:07.234 | 99.00th=[ 7701], 99.50th=[ 8586], 99.90th=[ 9765], 99.95th=[11338], 00:19:07.234 | 99.99th=[12125] 00:19:07.234 bw ( KiB/s): min=37384, max=38528, per=99.99%, avg=37906.00, stdev=526.07, samples=4 00:19:07.234 iops : min= 9346, max= 9632, avg=9476.50, stdev=131.52, samples=4 00:19:07.234 lat (msec) : 4=0.02%, 10=99.83%, 20=0.14% 00:19:07.234 cpu : usr=75.52%, sys=18.94%, ctx=8, majf=0, minf=5 00:19:07.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:07.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.234 issued rwts: total=19007,19021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.234 00:19:07.234 Run status group 0 (all jobs): 00:19:07.234 READ: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.2MiB (77.9MB), run=2007-2007msec 00:19:07.234 WRITE: bw=37.0MiB/s (38.8MB/s), 37.0MiB/s-37.0MiB/s (38.8MB/s-38.8MB/s), io=74.3MiB (77.9MB), run=2007-2007msec 00:19:07.234 20:10:49 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:07.235 20:10:49 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:07.235 20:10:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:19:07.235 20:10:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:07.235 20:10:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:19:07.235 20:10:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:07.235 20:10:49 -- common/autotest_common.sh@1327 -- # shift 00:19:07.235 20:10:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:19:07.235 20:10:49 -- common/autotest_common.sh@1330 -- # for sanitizer in 
"${sanitizers[@]}" 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:07.235 20:10:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:07.235 20:10:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:19:07.235 20:10:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:19:07.235 20:10:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:19:07.235 20:10:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:07.235 20:10:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:19:07.235 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:07.235 fio-3.35 00:19:07.235 Starting 1 thread 00:19:09.813 00:19:09.814 test: (groupid=0, jobs=1): err= 0: pid=72365: Wed Apr 24 20:10:51 2024 00:19:09.814 read: IOPS=8788, BW=137MiB/s (144MB/s)(276MiB/2007msec) 00:19:09.814 slat (usec): min=2, max=171, avg= 3.43, stdev= 2.28 00:19:09.814 clat (usec): min=1900, max=23831, avg=8183.10, stdev=2927.78 00:19:09.814 lat (usec): min=1903, max=23834, avg=8186.54, stdev=2928.12 00:19:09.814 clat percentiles (usec): 00:19:09.814 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5669], 00:19:09.814 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 7701], 60.00th=[ 8586], 00:19:09.814 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[12125], 95.00th=[13566], 00:19:09.814 | 99.00th=[16188], 99.50th=[18744], 99.90th=[21627], 99.95th=[22152], 00:19:09.814 | 99.99th=[22676] 00:19:09.814 bw ( KiB/s): min=59360, max=85536, per=49.77%, avg=69984.00, stdev=11141.73, samples=4 00:19:09.814 iops : min= 3710, max= 5346, avg=4374.00, stdev=696.36, samples=4 00:19:09.814 write: IOPS=5037, BW=78.7MiB/s (82.5MB/s)(142MiB/1808msec); 0 zone resets 00:19:09.814 slat (usec): min=28, max=348, avg=37.41, stdev=13.95 00:19:09.814 clat (usec): min=5182, max=29433, avg=11396.09, stdev=3063.81 00:19:09.814 lat (usec): min=5212, max=29469, avg=11433.50, stdev=3069.29 00:19:09.814 clat percentiles (usec): 00:19:09.814 | 1.00th=[ 6718], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9110], 00:19:09.814 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11469], 00:19:09.814 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14484], 95.00th=[16581], 00:19:09.814 | 99.00th=[23987], 99.50th=[25297], 99.90th=[27657], 99.95th=[27919], 00:19:09.814 | 99.99th=[29492] 00:19:09.814 bw ( KiB/s): min=62272, max=88608, per=90.40%, avg=72864.00, stdev=11298.16, samples=4 00:19:09.814 iops : min= 3892, max= 5538, avg=4554.00, stdev=706.14, samples=4 00:19:09.814 lat (msec) : 2=0.01%, 4=2.18%, 10=58.64%, 20=38.08%, 50=1.09% 00:19:09.814 cpu : usr=82.70%, sys=13.61%, ctx=5, majf=0, minf=21 00:19:09.814 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:09.814 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.814 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.814 issued rwts: total=17638,9108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.814 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.814 00:19:09.814 Run status group 0 (all jobs): 00:19:09.814 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=276MiB (289MB), run=2007-2007msec 00:19:09.814 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=142MiB (149MB), run=1808-1808msec 00:19:09.814 20:10:51 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.814 20:10:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.814 20:10:51 -- common/autotest_common.sh@10 -- # set +x 00:19:09.814 20:10:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.814 20:10:51 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:19:09.814 20:10:51 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:19:09.814 20:10:51 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:19:09.814 20:10:51 -- host/fio.sh@84 -- # nvmftestfini 00:19:09.814 20:10:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:09.814 20:10:51 -- nvmf/common.sh@117 -- # sync 00:19:09.814 20:10:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.814 20:10:51 -- nvmf/common.sh@120 -- # set +e 00:19:09.814 20:10:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.814 20:10:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.814 rmmod nvme_tcp 00:19:09.814 rmmod nvme_fabrics 00:19:09.814 rmmod nvme_keyring 00:19:09.814 20:10:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.814 20:10:51 -- nvmf/common.sh@124 -- # set -e 00:19:09.814 20:10:51 -- nvmf/common.sh@125 -- # return 0 00:19:09.814 20:10:51 -- nvmf/common.sh@478 -- # '[' -n 72258 ']' 00:19:09.814 20:10:51 -- nvmf/common.sh@479 -- # killprocess 72258 00:19:09.814 20:10:51 -- common/autotest_common.sh@936 -- # '[' -z 72258 ']' 00:19:09.814 20:10:51 -- common/autotest_common.sh@940 -- # kill -0 72258 00:19:09.814 20:10:51 -- common/autotest_common.sh@941 -- # uname 00:19:09.814 20:10:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:09.814 20:10:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72258 00:19:09.814 20:10:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:09.814 20:10:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:09.814 20:10:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72258' 00:19:09.814 killing process with pid 72258 00:19:09.814 20:10:51 -- common/autotest_common.sh@955 -- # kill 72258 00:19:09.814 [2024-04-24 20:10:51.873947] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:09.814 20:10:51 -- common/autotest_common.sh@960 -- # wait 72258 00:19:10.074 20:10:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:10.074 20:10:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:10.074 20:10:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:10.074 20:10:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.074 20:10:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.074 20:10:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.074 20:10:52 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:10.074 20:10:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.074 20:10:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:10.074 00:19:10.074 real 0m7.044s 00:19:10.074 user 0m27.631s 00:19:10.074 sys 0m1.944s 00:19:10.074 20:10:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:10.074 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.074 ************************************ 00:19:10.074 END TEST nvmf_fio_host 00:19:10.074 ************************************ 00:19:10.074 20:10:52 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:10.074 20:10:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:10.074 20:10:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:10.074 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.334 ************************************ 00:19:10.334 START TEST nvmf_failover 00:19:10.334 ************************************ 00:19:10.334 20:10:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:10.334 * Looking for test storage... 00:19:10.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:10.334 20:10:52 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.334 20:10:52 -- nvmf/common.sh@7 -- # uname -s 00:19:10.334 20:10:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.334 20:10:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.334 20:10:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.334 20:10:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.334 20:10:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.334 20:10:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.334 20:10:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.334 20:10:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.334 20:10:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.334 20:10:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.334 20:10:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:10.334 20:10:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:10.334 20:10:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.334 20:10:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.334 20:10:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.334 20:10:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.334 20:10:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.334 20:10:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.334 20:10:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.334 20:10:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.334 20:10:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.334 20:10:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.334 20:10:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.334 20:10:52 -- paths/export.sh@5 -- # export PATH 00:19:10.334 20:10:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.334 20:10:52 -- nvmf/common.sh@47 -- # : 0 00:19:10.334 20:10:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:10.334 20:10:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:10.334 20:10:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.334 20:10:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.334 20:10:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.334 20:10:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:10.334 20:10:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:10.334 20:10:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:10.334 20:10:52 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.334 20:10:52 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.334 20:10:52 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:10.334 20:10:52 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.334 20:10:52 -- host/failover.sh@18 -- # nvmftestinit 00:19:10.334 20:10:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:10.334 20:10:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.335 20:10:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:10.335 
20:10:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:10.335 20:10:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:10.335 20:10:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.335 20:10:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.335 20:10:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.335 20:10:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:10.335 20:10:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:10.335 20:10:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:10.335 20:10:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:10.335 20:10:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:10.335 20:10:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:10.335 20:10:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.335 20:10:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.335 20:10:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:10.335 20:10:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:10.335 20:10:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.335 20:10:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.335 20:10:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.335 20:10:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.335 20:10:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.335 20:10:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.335 20:10:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.335 20:10:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.335 20:10:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:10.335 20:10:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:10.335 Cannot find device "nvmf_tgt_br" 00:19:10.335 20:10:52 -- nvmf/common.sh@155 -- # true 00:19:10.335 20:10:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.335 Cannot find device "nvmf_tgt_br2" 00:19:10.335 20:10:52 -- nvmf/common.sh@156 -- # true 00:19:10.335 20:10:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:10.335 20:10:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:10.335 Cannot find device "nvmf_tgt_br" 00:19:10.335 20:10:52 -- nvmf/common.sh@158 -- # true 00:19:10.335 20:10:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:10.594 Cannot find device "nvmf_tgt_br2" 00:19:10.594 20:10:52 -- nvmf/common.sh@159 -- # true 00:19:10.594 20:10:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:10.594 20:10:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:10.594 20:10:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.594 20:10:52 -- nvmf/common.sh@162 -- # true 00:19:10.594 20:10:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.594 20:10:52 -- nvmf/common.sh@163 -- # true 00:19:10.594 20:10:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.594 20:10:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:10.594 20:10:52 -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.594 20:10:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.594 20:10:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.594 20:10:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.594 20:10:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.594 20:10:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:10.595 20:10:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:10.595 20:10:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:10.595 20:10:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:10.595 20:10:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:10.595 20:10:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:10.595 20:10:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.595 20:10:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.595 20:10:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.595 20:10:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:10.595 20:10:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:10.595 20:10:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.595 20:10:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.595 20:10:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.595 20:10:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.595 20:10:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.595 20:10:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:10.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:19:10.595 00:19:10.595 --- 10.0.0.2 ping statistics --- 00:19:10.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.595 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:10.595 20:10:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:10.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:10.595 00:19:10.595 --- 10.0.0.3 ping statistics --- 00:19:10.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.595 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:10.595 20:10:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:19:10.595 00:19:10.595 --- 10.0.0.1 ping statistics --- 00:19:10.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.595 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:10.595 20:10:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.595 20:10:52 -- nvmf/common.sh@422 -- # return 0 00:19:10.595 20:10:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:10.595 20:10:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.595 20:10:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:10.595 20:10:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:10.595 20:10:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.595 20:10:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:10.595 20:10:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:10.595 20:10:52 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:10.595 20:10:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:10.595 20:10:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:10.595 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.595 20:10:52 -- nvmf/common.sh@470 -- # nvmfpid=72584 00:19:10.595 20:10:52 -- nvmf/common.sh@471 -- # waitforlisten 72584 00:19:10.595 20:10:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:10.595 20:10:52 -- common/autotest_common.sh@817 -- # '[' -z 72584 ']' 00:19:10.595 20:10:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.854 20:10:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:10.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.854 20:10:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.854 20:10:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:10.854 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.854 [2024-04-24 20:10:52.898485] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:10.854 [2024-04-24 20:10:52.898557] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.854 [2024-04-24 20:10:53.039061] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:11.113 [2024-04-24 20:10:53.137780] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.113 [2024-04-24 20:10:53.137832] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.113 [2024-04-24 20:10:53.137855] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.113 [2024-04-24 20:10:53.137861] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.113 [2024-04-24 20:10:53.137865] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
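Note on the failover scenario: the trace below builds one subsystem with listeners on three ports and attaches bdevperf to it over two of them, so that removing a listener forces I/O onto the remaining path. Condensed from the RPC calls that follow (script path shortened to rpc.py, flags exactly as in the trace, the loop is only shorthand):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf (started with -z -r /var/tmp/bdevperf.sock) attaches the same controller twice,
    # giving NVMe0n1 a second path to fail over to
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # while bdevperf runs, the test removes and re-adds listeners to exercise failover, e.g.:
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bursts of nvmf_tcp_qpair_set_recv_state messages that follow each listener change appear as the TCP qpairs on the affected path are torn down.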
00:19:11.113 [2024-04-24 20:10:53.138490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.113 [2024-04-24 20:10:53.138664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.113 [2024-04-24 20:10:53.138667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.682 20:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:11.682 20:10:53 -- common/autotest_common.sh@850 -- # return 0 00:19:11.682 20:10:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:11.682 20:10:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:11.682 20:10:53 -- common/autotest_common.sh@10 -- # set +x 00:19:11.682 20:10:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.682 20:10:53 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:11.941 [2024-04-24 20:10:53.952597] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.941 20:10:53 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:11.941 Malloc0 00:19:12.201 20:10:54 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:12.201 20:10:54 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.460 20:10:54 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.719 [2024-04-24 20:10:54.770714] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:12.719 [2024-04-24 20:10:54.770950] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.719 20:10:54 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:12.719 [2024-04-24 20:10:54.958698] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:12.977 20:10:54 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:12.977 [2024-04-24 20:10:55.130597] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:12.977 20:10:55 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:12.977 20:10:55 -- host/failover.sh@31 -- # bdevperf_pid=72636 00:19:12.977 20:10:55 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.977 20:10:55 -- host/failover.sh@34 -- # waitforlisten 72636 /var/tmp/bdevperf.sock 00:19:12.978 20:10:55 -- common/autotest_common.sh@817 -- # '[' -z 72636 ']' 00:19:12.978 20:10:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.978 20:10:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:12.978 20:10:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.978 20:10:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:12.978 20:10:55 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 20:10:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:13.921 20:10:56 -- common/autotest_common.sh@850 -- # return 0 00:19:13.921 20:10:56 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:14.180 NVMe0n1 00:19:14.180 20:10:56 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:14.439 00:19:14.439 20:10:56 -- host/failover.sh@39 -- # run_test_pid=72654 00:19:14.439 20:10:56 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.439 20:10:56 -- host/failover.sh@41 -- # sleep 1 00:19:15.377 20:10:57 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.636 [2024-04-24 20:10:57.807009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.636 [2024-04-24 20:10:57.807061] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.636 [2024-04-24 20:10:57.807068] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.636 [2024-04-24 20:10:57.807074] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.637 [2024-04-24 20:10:57.807081] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.637 [2024-04-24 20:10:57.807086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.637 [2024-04-24 20:10:57.807092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703710 is same with the state(5) to be set 00:19:15.637 20:10:57 -- host/failover.sh@45 -- # sleep 3 00:19:18.931 20:11:00 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:18.931 00:19:18.931 20:11:01 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:19.190 [2024-04-24 20:11:01.344378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 
20:11:01.344457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344467] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344528] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 [2024-04-24 20:11:01.344547] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1703dd0 is same with the state(5) to be set 00:19:19.190 20:11:01 -- host/failover.sh@50 -- # sleep 3 00:19:22.475 20:11:04 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.475 [2024-04-24 20:11:04.563618] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.475 20:11:04 -- host/failover.sh@55 -- # sleep 1 00:19:23.407 20:11:05 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:23.665 [2024-04-24 20:11:05.780125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780225] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780235] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.665 [2024-04-24 20:11:05.780246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.666 [2024-04-24 20:11:05.780252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.666 [2024-04-24 20:11:05.780257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1702760 is same with the state(5) to be set 00:19:23.666 20:11:05 -- host/failover.sh@59 -- # wait 72654 00:19:30.244 0 00:19:30.244 20:11:11 -- host/failover.sh@61 -- # killprocess 72636 00:19:30.244 20:11:11 -- common/autotest_common.sh@936 -- # '[' -z 72636 ']' 00:19:30.244 20:11:11 -- common/autotest_common.sh@940 -- # kill -0 72636 00:19:30.244 20:11:11 -- common/autotest_common.sh@941 -- # uname 00:19:30.244 20:11:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.244 20:11:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72636 00:19:30.244 20:11:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:30.244 killing process with pid 72636 00:19:30.244 20:11:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:30.244 20:11:11 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 72636' 00:19:30.244 20:11:11 -- common/autotest_common.sh@955 -- # kill 72636 00:19:30.244 20:11:11 -- common/autotest_common.sh@960 -- # wait 72636 00:19:30.244 20:11:11 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:30.244 [2024-04-24 20:10:55.198113] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:30.244 [2024-04-24 20:10:55.198209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72636 ] 00:19:30.244 [2024-04-24 20:10:55.337902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.244 [2024-04-24 20:10:55.436081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.244 Running I/O for 15 seconds... 00:19:30.244 [2024-04-24 20:10:57.807138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:78 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.244 [2024-04-24 20:10:57.807541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 [2024-04-24 20:10:57.807767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.244 [2024-04-24 20:10:57.807778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.244 
[2024-04-24 20:10:57.807787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.807980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.807989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.245 [2024-04-24 20:10:57.808540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.245 [2024-04-24 20:10:57.808551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.245 [2024-04-24 20:10:57.808561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 
[2024-04-24 20:10:57.808632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.808700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.808983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.808993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:48 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.246 [2024-04-24 20:10:57.809201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83600 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.246 [2024-04-24 20:10:57.809357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.246 [2024-04-24 20:10:57.809366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 
[2024-04-24 20:10:57.809476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.247 [2024-04-24 20:10:57.809540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809687] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.247 [2024-04-24 20:10:57.809856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157a960 is same with the state(5) to be set 00:19:30.247 [2024-04-24 20:10:57.809879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.247 [2024-04-24 20:10:57.809886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.247 [2024-04-24 20:10:57.809893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:19:30.247 [2024-04-24 20:10:57.809902] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.809950] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x157a960 was disconnected and freed. reset controller. 00:19:30.247 [2024-04-24 20:10:57.809963] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:30.247 [2024-04-24 20:10:57.810009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.247 [2024-04-24 20:10:57.810021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.810031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.247 [2024-04-24 20:10:57.810041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.810051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.247 [2024-04-24 20:10:57.810060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.810069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.247 [2024-04-24 20:10:57.810078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.247 [2024-04-24 20:10:57.810088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.247 [2024-04-24 20:10:57.813431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.247 [2024-04-24 20:10:57.813484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15141d0 (9): Bad file descriptor 00:19:30.247 [2024-04-24 20:10:57.845122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
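The failover just completed above is driven purely by listener removal; a minimal sketch of that trigger, reusing the RPC calls already issued earlier in this run (assuming the bdevperf RPC socket at /var/tmp/bdevperf.sock):
  # attach both paths under the same controller name, then drop the active listener
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # queued I/O on 10.0.0.2:4420 is aborted (the SQ DELETION completions above) and bdev_nvme resets onto 10.0.0.2:4421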
00:19:30.247 [2024-04-24 20:11:01.344607 through 20:11:01.347238] nvme_qpair.c: repeated *NOTICE* command/completion pairs omitted: outstanding READ (lba 119264-119760) and WRITE (lba 119776-120280) commands on sqid:1, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:30.251 [2024-04-24 20:11:01.347249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x157e730 is same with the state(5) to be set 00:19:30.251 [2024-04-24 20:11:01.347267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.251 [2024-04-24 20:11:01.347274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.251 [2024-04-24 20:11:01.347281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119768 len:8 PRP1 0x0 PRP2 0x0 00:19:30.251 [2024-04-24 20:11:01.347290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:01.347341] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x157e730 was disconnected and freed. reset controller.
00:19:30.251 [2024-04-24 20:11:01.347353] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:19:30.251 [2024-04-24 20:11:01.347407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.251 [2024-04-24 20:11:01.347420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:01.347431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.251 [2024-04-24 20:11:01.347440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:01.347450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.251 [2024-04-24 20:11:01.347459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:01.347468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.251 [2024-04-24 20:11:01.347477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:01.347487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.251 [2024-04-24 20:11:01.350869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.251 [2024-04-24 20:11:01.350921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15141d0 (9): Bad file descriptor 00:19:30.251 [2024-04-24 20:11:01.381352] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
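The progression 4420 -> 4421 -> 4422 indicates the active listener is being dropped one path at a time, which deletes the submission queues on that path and forces the host to reset onto the next registered address. A rough target-side sketch of that trigger, assuming the listeners were added up front and using only the addresses visible in this log (the actual test harness and its timing are not shown here):

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # removing the listener the host is currently using aborts its I/O with SQ DELETION and starts the failover seen above
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421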
00:19:30.251 [2024-04-24 20:11:05.780308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.251 [2024-04-24 20:11:05.780733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.251 [2024-04-24 20:11:05.780756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.251 [2024-04-24 20:11:05.780786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.251 [2024-04-24 20:11:05.780807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.251 [2024-04-24 20:11:05.780828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780839] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.251 [2024-04-24 20:11:05.780849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.251 [2024-04-24 20:11:05.780871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.251 [2024-04-24 20:11:05.780882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.780903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.780912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.780924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.780934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.780945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.780955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.780966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.780976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.780988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.780997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90680 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:30.252 [2024-04-24 20:11:05.781281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781569] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.252 [2024-04-24 20:11:05.781680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781795] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.252 [2024-04-24 20:11:05.781814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.252 [2024-04-24 20:11:05.781825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.781990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.781999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.782019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:30.253 [2024-04-24 20:11:05.782217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.253 [2024-04-24 20:11:05.782355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.782382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.782403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.782424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782435] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.782444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.253 [2024-04-24 20:11:05.782456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.253 [2024-04-24 20:11:05.782470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.782691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.782976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.782992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.783001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:30.254 [2024-04-24 20:11:05.783020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91088 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.254 [2024-04-24 20:11:05.783161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156a350 is same with the state(5) to be set 00:19:30.254 [2024-04-24 20:11:05.783183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:30.254 [2024-04-24 20:11:05.783191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:30.254 [2024-04-24 20:11:05.783198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91136 len:8 PRP1 0x0 PRP2 0x0 00:19:30.254 [2024-04-24 20:11:05.783207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783254] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x156a350 was disconnected and freed. reset controller. 
00:19:30.254 [2024-04-24 20:11:05.783266] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:19:30.254 [2024-04-24 20:11:05.783311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.254 [2024-04-24 20:11:05.783328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.254 [2024-04-24 20:11:05.783339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.255 [2024-04-24 20:11:05.783347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.255 [2024-04-24 20:11:05.783358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.255 [2024-04-24 20:11:05.783367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.255 [2024-04-24 20:11:05.783385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.255 [2024-04-24 20:11:05.783395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.255 [2024-04-24 20:11:05.783404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.255 [2024-04-24 20:11:05.786795] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.255 [2024-04-24 20:11:05.786836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15141d0 (9): Bad file descriptor 00:19:30.255 [2024-04-24 20:11:05.823592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
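Every ABORTED - SQ DELETION (00/08) completion above is an in-flight I/O on the old path being completed with status code type 0x0 / status code 0x08 while that path's submission queue is deleted for the failover; the reset that follows moves the controller to the next registered address. A rough way to summarize a capture like this is sketched below, assuming the bdevperf output was saved to try.txt as this test does later:
  grep -c 'ABORTED - SQ DELETION' try.txt                  # I/Os completed with SQ-deletion status
  grep -o 'Start failover from .*' try.txt | sort | uniq -c  # which path switches happened, and how often
  grep -c 'Resetting controller successful' try.txt        # one per completed path switch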
00:19:30.255
00:19:30.255 Latency(us)
00:19:30.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.255 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:30.255 Verification LBA range: start 0x0 length 0x4000
00:19:30.255 NVMe0n1 : 15.01 10069.96 39.34 264.21 0.00 12360.38 547.33 25298.61
00:19:30.255 ===================================================================================================================
00:19:30.255 Total : 10069.96 39.34 264.21 0.00 12360.38 547.33 25298.61
00:19:30.255 Received shutdown signal, test time was about 15.000000 seconds
00:19:30.255
00:19:30.255 Latency(us)
00:19:30.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.255 ===================================================================================================================
00:19:30.255 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:30.255 20:11:11 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:19:30.255 20:11:11 -- host/failover.sh@65 -- # count=3
00:19:30.255 20:11:11 -- host/failover.sh@67 -- # (( count != 3 ))
00:19:30.255 20:11:11 -- host/failover.sh@73 -- # bdevperf_pid=72832
00:19:30.255 20:11:11 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:19:30.255 20:11:11 -- host/failover.sh@75 -- # waitforlisten 72832 /var/tmp/bdevperf.sock
00:19:30.255 20:11:11 -- common/autotest_common.sh@817 -- # '[' -z 72832 ']'
00:19:30.255 20:11:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:30.255 20:11:11 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:30.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:30.255 20:11:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
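The -z flag makes bdevperf start idle and only serve its RPC socket, so the script can attach NVMe paths before any I/O is issued; waitforlisten then just polls that socket. A minimal stand-alone version of the pattern, where the retry loop is an illustrative stand-in for the autotest waitforlisten helper:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # keep polling until the RPC server inside bdevperf answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done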
00:19:30.255 20:11:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:30.255 20:11:11 -- common/autotest_common.sh@10 -- # set +x 00:19:30.820 20:11:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:30.820 20:11:12 -- common/autotest_common.sh@850 -- # return 0 00:19:30.820 20:11:12 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:30.820 [2024-04-24 20:11:13.071468] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:31.078 20:11:13 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:19:31.079 [2024-04-24 20:11:13.275307] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:19:31.079 20:11:13 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:31.337 NVMe0n1 00:19:31.337 20:11:13 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:31.596 00:19:31.854 20:11:13 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:31.854 00:19:32.113 20:11:14 -- host/failover.sh@82 -- # grep -q NVMe0 00:19:32.113 20:11:14 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.113 20:11:14 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:32.372 20:11:14 -- host/failover.sh@87 -- # sleep 3 00:19:35.655 20:11:17 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:35.655 20:11:17 -- host/failover.sh@88 -- # grep -q NVMe0 00:19:35.655 20:11:17 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:35.655 20:11:17 -- host/failover.sh@90 -- # run_test_pid=72909 00:19:35.655 20:11:17 -- host/failover.sh@92 -- # wait 72909 00:19:37.029 0 00:19:37.029 20:11:18 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:37.029 [2024-04-24 20:11:12.035869] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
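All three bdev_nvme_attach_controller calls above use the same -b NVMe0 name, so ports 4421 and 4422 are registered as alternate failover paths of the one controller rather than separate bdevs, and detaching the active 4420 path is what forces the failover exercised in this run. Condensed into a sketch of the same RPC sequence (the RPC/NQN variables are just shorthand; listeners go to the target's default socket, attach/detach to the bdevperf socket):
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done
  # drop the path currently carrying I/O; bdev_nvme fails over to the next registered one
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  sleep 3
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests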
00:19:37.029 [2024-04-24 20:11:12.036040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72832 ] 00:19:37.029 [2024-04-24 20:11:12.174682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.029 [2024-04-24 20:11:12.280090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.029 [2024-04-24 20:11:14.526363] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:19:37.029 [2024-04-24 20:11:14.526484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.029 [2024-04-24 20:11:14.526501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-04-24 20:11:14.526513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.029 [2024-04-24 20:11:14.526522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-04-24 20:11:14.526531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.029 [2024-04-24 20:11:14.526539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-04-24 20:11:14.526548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.029 [2024-04-24 20:11:14.526557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.029 [2024-04-24 20:11:14.526566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:37.029 [2024-04-24 20:11:14.526605] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.029 [2024-04-24 20:11:14.526623] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd781d0 (9): Bad file descriptor 00:19:37.029 [2024-04-24 20:11:14.533313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:37.029 Running I/O for 1 seconds... 
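In the result table that follows (and in the 15-second table further up), the MiB/s column is consistent with IOPS multiplied by the 4096-byte IO size; a quick arithmetic check:
  # MiB/s = IOPS * 4096 / 2^20, i.e. IOPS / 256
  awk 'BEGIN { printf "%.2f %.2f\n", 10069.96/256, 10280.42/256 }'   # prints 39.34 40.16, matching the two runs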
00:19:37.029 00:19:37.029 Latency(us) 00:19:37.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.029 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.029 Verification LBA range: start 0x0 length 0x4000 00:19:37.029 NVMe0n1 : 1.01 10280.42 40.16 0.00 0.00 12374.96 1244.90 12821.02 00:19:37.029 =================================================================================================================== 00:19:37.029 Total : 10280.42 40.16 0.00 0.00 12374.96 1244.90 12821.02 00:19:37.029 20:11:18 -- host/failover.sh@95 -- # grep -q NVMe0 00:19:37.029 20:11:18 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.029 20:11:19 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.029 20:11:19 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:37.029 20:11:19 -- host/failover.sh@99 -- # grep -q NVMe0 00:19:37.287 20:11:19 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:37.543 20:11:19 -- host/failover.sh@101 -- # sleep 3 00:19:40.820 20:11:22 -- host/failover.sh@103 -- # grep -q NVMe0 00:19:40.820 20:11:22 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:40.820 20:11:22 -- host/failover.sh@108 -- # killprocess 72832 00:19:40.820 20:11:22 -- common/autotest_common.sh@936 -- # '[' -z 72832 ']' 00:19:40.820 20:11:22 -- common/autotest_common.sh@940 -- # kill -0 72832 00:19:40.820 20:11:22 -- common/autotest_common.sh@941 -- # uname 00:19:40.820 20:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.820 20:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72832 00:19:40.820 killing process with pid 72832 00:19:40.820 20:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:40.820 20:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:40.820 20:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72832' 00:19:40.820 20:11:22 -- common/autotest_common.sh@955 -- # kill 72832 00:19:40.820 20:11:22 -- common/autotest_common.sh@960 -- # wait 72832 00:19:41.077 20:11:23 -- host/failover.sh@110 -- # sync 00:19:41.077 20:11:23 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.335 20:11:23 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:41.335 20:11:23 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:41.335 20:11:23 -- host/failover.sh@116 -- # nvmftestfini 00:19:41.335 20:11:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:41.335 20:11:23 -- nvmf/common.sh@117 -- # sync 00:19:41.335 20:11:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.335 20:11:23 -- nvmf/common.sh@120 -- # set +e 00:19:41.335 20:11:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.335 20:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.335 rmmod nvme_tcp 00:19:41.335 rmmod nvme_fabrics 00:19:41.335 rmmod nvme_keyring 00:19:41.335 20:11:23 -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.335 20:11:23 -- nvmf/common.sh@124 -- # set -e 00:19:41.335 20:11:23 -- nvmf/common.sh@125 -- # return 0 00:19:41.335 20:11:23 -- nvmf/common.sh@478 -- # '[' -n 72584 ']' 00:19:41.335 20:11:23 -- nvmf/common.sh@479 -- # killprocess 72584 00:19:41.335 20:11:23 -- common/autotest_common.sh@936 -- # '[' -z 72584 ']' 00:19:41.335 20:11:23 -- common/autotest_common.sh@940 -- # kill -0 72584 00:19:41.335 20:11:23 -- common/autotest_common.sh@941 -- # uname 00:19:41.335 20:11:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.335 20:11:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72584 00:19:41.335 killing process with pid 72584 00:19:41.335 20:11:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:41.335 20:11:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:41.335 20:11:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72584' 00:19:41.335 20:11:23 -- common/autotest_common.sh@955 -- # kill 72584 00:19:41.335 [2024-04-24 20:11:23.469259] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:41.335 20:11:23 -- common/autotest_common.sh@960 -- # wait 72584 00:19:41.594 20:11:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:41.594 20:11:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:41.594 20:11:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:41.594 20:11:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.594 20:11:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.594 20:11:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.594 20:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.594 20:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.594 20:11:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:41.594 00:19:41.594 real 0m31.437s 00:19:41.594 user 2m1.764s 00:19:41.594 sys 0m4.462s 00:19:41.594 20:11:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:41.594 20:11:23 -- common/autotest_common.sh@10 -- # set +x 00:19:41.594 ************************************ 00:19:41.594 END TEST nvmf_failover 00:19:41.594 ************************************ 00:19:41.594 20:11:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:41.594 20:11:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:41.594 20:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:41.594 20:11:23 -- common/autotest_common.sh@10 -- # set +x 00:19:41.854 ************************************ 00:19:41.854 START TEST nvmf_discovery 00:19:41.854 ************************************ 00:19:41.854 20:11:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:41.854 * Looking for test storage... 
00:19:41.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:41.854 20:11:24 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.854 20:11:24 -- nvmf/common.sh@7 -- # uname -s 00:19:41.854 20:11:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.854 20:11:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.854 20:11:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.854 20:11:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.854 20:11:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.854 20:11:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.854 20:11:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.854 20:11:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.854 20:11:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.854 20:11:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.854 20:11:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:41.854 20:11:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:41.854 20:11:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.854 20:11:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.854 20:11:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.854 20:11:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.854 20:11:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.854 20:11:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.854 20:11:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.854 20:11:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.854 20:11:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.854 20:11:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.855 20:11:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.855 20:11:24 -- paths/export.sh@5 -- # export PATH 00:19:41.855 20:11:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.855 20:11:24 -- nvmf/common.sh@47 -- # : 0 00:19:41.855 20:11:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:41.855 20:11:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:41.855 20:11:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.855 20:11:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.855 20:11:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.855 20:11:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:41.855 20:11:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:41.855 20:11:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:41.855 20:11:24 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:41.855 20:11:24 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:41.855 20:11:24 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:41.855 20:11:24 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:41.855 20:11:24 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:41.855 20:11:24 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:41.855 20:11:24 -- host/discovery.sh@25 -- # nvmftestinit 00:19:41.855 20:11:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:41.855 20:11:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.855 20:11:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:41.855 20:11:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:41.855 20:11:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:41.855 20:11:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.855 20:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.855 20:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.855 20:11:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:41.855 20:11:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:41.855 20:11:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:41.855 20:11:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:41.855 20:11:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:41.855 20:11:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:41.855 20:11:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.855 20:11:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.855 20:11:24 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:41.855 20:11:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:41.855 20:11:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.855 20:11:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.855 20:11:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.855 20:11:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.855 20:11:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.855 20:11:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.855 20:11:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.855 20:11:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.855 20:11:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:41.855 20:11:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:42.113 Cannot find device "nvmf_tgt_br" 00:19:42.113 20:11:24 -- nvmf/common.sh@155 -- # true 00:19:42.113 20:11:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.113 Cannot find device "nvmf_tgt_br2" 00:19:42.113 20:11:24 -- nvmf/common.sh@156 -- # true 00:19:42.113 20:11:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:42.113 20:11:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:42.113 Cannot find device "nvmf_tgt_br" 00:19:42.113 20:11:24 -- nvmf/common.sh@158 -- # true 00:19:42.113 20:11:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:42.114 Cannot find device "nvmf_tgt_br2" 00:19:42.114 20:11:24 -- nvmf/common.sh@159 -- # true 00:19:42.114 20:11:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:42.114 20:11:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:42.114 20:11:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.114 20:11:24 -- nvmf/common.sh@162 -- # true 00:19:42.114 20:11:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.114 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:42.114 20:11:24 -- nvmf/common.sh@163 -- # true 00:19:42.114 20:11:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:42.114 20:11:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:42.114 20:11:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:42.114 20:11:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:42.114 20:11:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:42.114 20:11:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:42.114 20:11:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:42.114 20:11:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:42.114 20:11:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:42.114 20:11:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:42.114 20:11:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:42.114 20:11:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:42.114 20:11:24 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:42.114 20:11:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:42.114 20:11:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:42.114 20:11:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:42.114 20:11:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:42.114 20:11:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:42.114 20:11:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:42.114 20:11:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:42.372 20:11:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:42.372 20:11:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:42.372 20:11:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:42.372 20:11:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:42.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:42.372 00:19:42.372 --- 10.0.0.2 ping statistics --- 00:19:42.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.372 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:42.372 20:11:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:42.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:42.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:19:42.372 00:19:42.372 --- 10.0.0.3 ping statistics --- 00:19:42.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.372 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:42.372 20:11:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:42.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:42.372 00:19:42.372 --- 10.0.0.1 ping statistics --- 00:19:42.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.372 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:42.372 20:11:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.372 20:11:24 -- nvmf/common.sh@422 -- # return 0 00:19:42.372 20:11:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:42.372 20:11:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.372 20:11:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:42.372 20:11:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:42.372 20:11:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.372 20:11:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:42.372 20:11:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:42.372 20:11:24 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:42.372 20:11:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:42.372 20:11:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:42.372 20:11:24 -- common/autotest_common.sh@10 -- # set +x 00:19:42.372 20:11:24 -- nvmf/common.sh@470 -- # nvmfpid=73178 00:19:42.373 20:11:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:42.373 20:11:24 -- nvmf/common.sh@471 -- # waitforlisten 73178 00:19:42.373 20:11:24 -- common/autotest_common.sh@817 -- # '[' -z 73178 ']' 00:19:42.373 20:11:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.373 20:11:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:42.373 20:11:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.373 20:11:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:42.373 20:11:24 -- common/autotest_common.sh@10 -- # set +x 00:19:42.373 [2024-04-24 20:11:24.470575] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:42.373 [2024-04-24 20:11:24.470640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.373 [2024-04-24 20:11:24.611209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.630 [2024-04-24 20:11:24.707101] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.630 [2024-04-24 20:11:24.707145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.630 [2024-04-24 20:11:24.707151] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.630 [2024-04-24 20:11:24.707155] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.630 [2024-04-24 20:11:24.707159] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
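Note on the setup above: nvmf_veth_init builds a self-contained NVMe-oF/TCP topology before any test logic runs. The initiator side stays in the root namespace on nvmf_init_if (10.0.0.1/24), the target side lives in the nvmf_tgt_ns_spdk namespace on nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and the host-side veth peers are enslaved to the nvmf_br bridge; the earlier "Cannot find device" / "Cannot open network namespace" messages come from best-effort cleanup of any previous run and are tolerated. The pings confirm reachability, after which nvmf_tgt is started inside the namespace and waitforlisten polls its RPC socket. A condensed sketch of the same topology (interface names and addresses taken from the log; the authoritative version is test/nvmf/common.sh, and the second target interface is omitted for brevity):

  # target namespace plus one veth pair per endpoint
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # initiator address in the root namespace, target address inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers so both ends share one L2 segment
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # allow NVMe/TCP traffic in and across the bridge, then sanity-check
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2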
00:19:42.630 [2024-04-24 20:11:24.707181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.199 20:11:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:43.199 20:11:25 -- common/autotest_common.sh@850 -- # return 0 00:19:43.199 20:11:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:43.199 20:11:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.199 20:11:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.199 20:11:25 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.199 20:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.199 [2024-04-24 20:11:25.370389] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.199 20:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.199 20:11:25 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:43.199 20:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.199 [2024-04-24 20:11:25.382294] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:43.199 [2024-04-24 20:11:25.382518] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:43.199 20:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.199 20:11:25 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:43.199 20:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.199 null0 00:19:43.199 20:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.199 20:11:25 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:43.199 20:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.199 null1 00:19:43.199 20:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.199 20:11:25 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:43.199 20:11:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.199 20:11:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.199 20:11:25 -- host/discovery.sh@45 -- # hostpid=73213 00:19:43.199 20:11:25 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:43.199 20:11:25 -- host/discovery.sh@46 -- # waitforlisten 73213 /tmp/host.sock 00:19:43.199 20:11:25 -- common/autotest_common.sh@817 -- # '[' -z 73213 ']' 00:19:43.199 20:11:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:43.199 20:11:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:43.199 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:43.199 20:11:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
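Note: with the namespaced target's RPC socket available, the script provisions it and then starts a second SPDK app that plays the NVMe-oF host. Over the target socket it creates the TCP transport, exposes the well-known discovery subsystem on 10.0.0.2:8009 (the deprecation warning about [listen_]address.transport comes from that call), and creates two null bdevs that later back namespaces of cnode0. The host app runs in the root namespace with its own RPC socket, /tmp/host.sock, which is where the bdev_nvme_* commands below are sent. Roughly the scripts/rpc.py calls that rpc_cmd forwards here (a sketch, not the literal discovery.sh code; bdev_null_create takes the name plus the size and block-size arguments shown in the log):

  # target side: default socket /var/tmp/spdk.sock, inside nvmf_tgt_ns_spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

  # host side: a second nvmf_tgt with its own RPC socket, used as the initiator
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &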
00:19:43.199 20:11:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:43.199 20:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:43.458 [2024-04-24 20:11:25.467620] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:43.458 [2024-04-24 20:11:25.467693] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73213 ] 00:19:43.458 [2024-04-24 20:11:25.608395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.458 [2024-04-24 20:11:25.711461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.394 20:11:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:44.394 20:11:26 -- common/autotest_common.sh@850 -- # return 0 00:19:44.394 20:11:26 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:44.394 20:11:26 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@72 -- # notify_id=0 00:19:44.394 20:11:26 -- host/discovery.sh@83 -- # get_subsystem_names 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # sort 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # xargs 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:44.394 20:11:26 -- host/discovery.sh@84 -- # get_bdev_list 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # sort 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # xargs 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:44.394 20:11:26 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@87 -- # get_subsystem_names 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.394 20:11:26 -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # sort 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # xargs 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:44.394 20:11:26 -- host/discovery.sh@88 -- # get_bdev_list 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # sort 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # xargs 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:44.394 20:11:26 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@91 -- # get_subsystem_names 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # xargs 00:19:44.394 20:11:26 -- host/discovery.sh@59 -- # sort 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:44.394 20:11:26 -- host/discovery.sh@92 -- # get_bdev_list 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # sort 00:19:44.394 20:11:26 -- host/discovery.sh@55 -- # xargs 00:19:44.394 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.394 20:11:26 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:44.394 20:11:26 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:44.394 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.394 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.653 [2024-04-24 20:11:26.652303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.653 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.653 20:11:26 -- host/discovery.sh@97 -- # get_subsystem_names 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.653 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.653 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 
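Note: at this point the host app has been told to run discovery against 10.0.0.2:8009 (bdev_nvme_start_discovery -b nvme ... -q nqn.2021-12.io.spdk:test), and the target has gained a data subsystem, nqn.2016-06.io.spdk:cnode0, backed by null0 and listening on port 4420. Everything that follows is polling: get_subsystem_names and get_bdev_list query the host app and normalise the output, and waitforcondition re-evaluates an expression up to ten times with a one-second pause until it holds. A rough sketch of those helpers (the real implementations live in test/nvmf/host/discovery.sh and test/common/autotest_common.sh):

  get_subsystem_names() {
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }
  # wait until the discovered controller shows up, or give up after ~10 tries
  max=10
  until [[ "$(get_subsystem_names)" == "nvme0" ]]; do
      (( max-- )) || exit 1
      sleep 1
  done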
00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # sort 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # xargs 00:19:44.653 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.653 20:11:26 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:19:44.653 20:11:26 -- host/discovery.sh@98 -- # get_bdev_list 00:19:44.653 20:11:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.653 20:11:26 -- host/discovery.sh@55 -- # sort 00:19:44.653 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.653 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.653 20:11:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.653 20:11:26 -- host/discovery.sh@55 -- # xargs 00:19:44.653 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.653 20:11:26 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:44.653 20:11:26 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:44.653 20:11:26 -- host/discovery.sh@79 -- # expected_count=0 00:19:44.653 20:11:26 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.653 20:11:26 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.653 20:11:26 -- common/autotest_common.sh@901 -- # local max=10 00:19:44.653 20:11:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:44.653 20:11:26 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:44.653 20:11:26 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:44.653 20:11:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:44.653 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.653 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.653 20:11:26 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:44.653 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.653 20:11:26 -- host/discovery.sh@74 -- # notification_count=0 00:19:44.653 20:11:26 -- host/discovery.sh@75 -- # notify_id=0 00:19:44.653 20:11:26 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:44.653 20:11:26 -- common/autotest_common.sh@904 -- # return 0 00:19:44.653 20:11:26 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:44.653 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.653 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.653 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.653 20:11:26 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:44.653 20:11:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:44.653 20:11:26 -- common/autotest_common.sh@901 -- # local max=10 00:19:44.653 20:11:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:44.653 20:11:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:44.653 20:11:26 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.653 20:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.653 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # sort 00:19:44.653 20:11:26 -- host/discovery.sh@59 -- # xargs 00:19:44.653 20:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.653 20:11:26 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:19:44.653 20:11:26 -- common/autotest_common.sh@906 -- # sleep 1 00:19:45.221 [2024-04-24 20:11:27.339120] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:45.221 [2024-04-24 20:11:27.339157] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:45.221 [2024-04-24 20:11:27.339171] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:45.221 [2024-04-24 20:11:27.345145] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:45.221 [2024-04-24 20:11:27.400652] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:45.221 [2024-04-24 20:11:27.400689] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:45.787 20:11:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:45.787 20:11:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:45.787 20:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.787 20:11:27 -- common/autotest_common.sh@10 -- # set +x 00:19:45.787 20:11:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:45.787 20:11:27 -- host/discovery.sh@59 -- # sort 00:19:45.787 20:11:27 -- 
host/discovery.sh@59 -- # xargs 00:19:45.787 20:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.787 20:11:27 -- common/autotest_common.sh@904 -- # return 0 00:19:45.787 20:11:27 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@901 -- # local max=10 00:19:45.787 20:11:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:45.787 20:11:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.787 20:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.787 20:11:27 -- common/autotest_common.sh@10 -- # set +x 00:19:45.787 20:11:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:45.787 20:11:27 -- host/discovery.sh@55 -- # sort 00:19:45.787 20:11:27 -- host/discovery.sh@55 -- # xargs 00:19:45.787 20:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:45.787 20:11:27 -- common/autotest_common.sh@904 -- # return 0 00:19:45.787 20:11:27 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@901 -- # local max=10 00:19:45.787 20:11:27 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:45.787 20:11:27 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:45.787 20:11:27 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:45.787 20:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.787 20:11:27 -- common/autotest_common.sh@10 -- # set +x 00:19:45.787 20:11:27 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:45.787 20:11:27 -- host/discovery.sh@63 -- # sort -n 00:19:45.787 20:11:27 -- host/discovery.sh@63 -- # xargs 00:19:45.787 20:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.787 20:11:28 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:19:45.787 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:45.787 20:11:28 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:45.787 20:11:28 -- host/discovery.sh@79 -- # expected_count=1 00:19:45.787 20:11:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:45.788 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:45.788 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:45.788 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:45.788 20:11:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:45.788 20:11:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:46.046 
20:11:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:46.046 20:11:28 -- host/discovery.sh@74 -- # jq '. | length' 00:19:46.046 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.046 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.046 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.046 20:11:28 -- host/discovery.sh@74 -- # notification_count=1 00:19:46.046 20:11:28 -- host/discovery.sh@75 -- # notify_id=1 00:19:46.046 20:11:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:46.046 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.046 20:11:28 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:46.046 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.046 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.046 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.046 20:11:28 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.046 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.046 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.046 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.046 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:46.046 20:11:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:46.046 20:11:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.046 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.046 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.046 20:11:28 -- host/discovery.sh@55 -- # sort 00:19:46.046 20:11:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.046 20:11:28 -- host/discovery.sh@55 -- # xargs 00:19:46.046 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.046 20:11:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.046 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.047 20:11:28 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:46.047 20:11:28 -- host/discovery.sh@79 -- # expected_count=1 00:19:46.047 20:11:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:46.047 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:46.047 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.047 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:46.047 20:11:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:46.047 20:11:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:46.047 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.047 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.047 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.047 20:11:28 -- host/discovery.sh@74 -- # notification_count=1 00:19:46.047 20:11:28 -- host/discovery.sh@75 -- # notify_id=2 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:46.047 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.047 20:11:28 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:46.047 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.047 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.047 [2024-04-24 20:11:28.214626] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:46.047 [2024-04-24 20:11:28.214919] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:46.047 [2024-04-24 20:11:28.214947] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:46.047 [2024-04-24 20:11:28.220894] bdev_nvme.c:6830:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:19:46.047 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.047 20:11:28 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:46.047 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:46.047 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.047 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:46.047 20:11:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:46.047 20:11:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:46.047 20:11:28 -- host/discovery.sh@59 -- # xargs 00:19:46.047 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.047 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.047 20:11:28 -- host/discovery.sh@59 -- # sort 00:19:46.047 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.047 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.047 20:11:28 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.047 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.047 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.047 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:46.047 20:11:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:46.047 20:11:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.047 20:11:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.047 20:11:28 -- host/discovery.sh@55 -- # xargs 00:19:46.047 20:11:28 -- host/discovery.sh@55 -- # sort 00:19:46.047 
20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.047 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.047 [2024-04-24 20:11:28.280121] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:46.047 [2024-04-24 20:11:28.280144] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:46.047 [2024-04-24 20:11:28.280149] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:46.047 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.305 20:11:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.305 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.305 20:11:28 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.306 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # xargs 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # sort -n 00:19:46.306 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.306 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:46.306 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.306 20:11:28 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:46.306 20:11:28 -- host/discovery.sh@79 -- # expected_count=0 00:19:46.306 20:11:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:46.306 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:46.306 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.306 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:46.306 20:11:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:46.306 20:11:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:46.306 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.306 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.306 20:11:28 -- host/discovery.sh@74 -- # notification_count=0 00:19:46.306 20:11:28 -- host/discovery.sh@75 -- # notify_id=2 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:46.306 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.306 20:11:28 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:46.306 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.306 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 [2024-04-24 20:11:28.391000] bdev_nvme.c:6888:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:46.306 [2024-04-24 20:11:28.391031] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:46.306 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.306 20:11:28 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.306 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.306 [2024-04-24 20:11:28.396534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.306 [2024-04-24 20:11:28.396560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.306 [2024-04-24 20:11:28.396568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.306 [2024-04-24 20:11:28.396575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.306 [2024-04-24 20:11:28.396581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.306 [2024-04-24 20:11:28.396587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.306 [2024-04-24 20:11:28.396593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.306 [2024-04-24 20:11:28.396598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.306 [2024-04-24 20:11:28.396604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f070 is same with the state(5) to be set 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:46.306 [2024-04-24 20:11:28.396975] bdev_nvme.c:6693:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:46.306 [2024-04-24 20:11:28.396994] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found 
again 00:19:46.306 [2024-04-24 20:11:28.397040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e7f070 (9): Bad file descriptor 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:46.306 20:11:28 -- host/discovery.sh@59 -- # sort 00:19:46.306 20:11:28 -- host/discovery.sh@59 -- # xargs 00:19:46.306 20:11:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:46.306 20:11:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:46.306 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.306 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.306 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.306 20:11:28 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.306 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:46.306 20:11:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.306 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.306 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 20:11:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.306 20:11:28 -- host/discovery.sh@55 -- # sort 00:19:46.306 20:11:28 -- host/discovery.sh@55 -- # xargs 00:19:46.306 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:46.306 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.306 20:11:28 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.306 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:46.306 20:11:28 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:46.306 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.306 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # sort -n 00:19:46.306 20:11:28 -- host/discovery.sh@63 -- # xargs 00:19:46.306 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.565 20:11:28 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:19:46.565 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.565 20:11:28 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:46.565 
20:11:28 -- host/discovery.sh@79 -- # expected_count=0 00:19:46.565 20:11:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:46.565 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:46.565 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.565 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.565 20:11:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:46.565 20:11:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:46.565 20:11:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:46.565 20:11:28 -- host/discovery.sh@74 -- # jq '. | length' 00:19:46.565 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.565 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.565 20:11:28 -- host/discovery.sh@74 -- # notification_count=0 00:19:46.565 20:11:28 -- host/discovery.sh@75 -- # notify_id=2 00:19:46.565 20:11:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:46.565 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.565 20:11:28 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:46.565 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.565 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.565 20:11:28 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:46.565 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:46.565 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.565 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.565 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:46.565 20:11:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:19:46.565 20:11:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:46.565 20:11:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:46.565 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.565 20:11:28 -- host/discovery.sh@59 -- # sort 00:19:46.566 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.566 20:11:28 -- host/discovery.sh@59 -- # xargs 00:19:46.566 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:46.566 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.566 20:11:28 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:46.566 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:46.566 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.566 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:19:46.566 20:11:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:46.566 20:11:28 -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.566 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.566 20:11:28 -- host/discovery.sh@55 -- # sort 00:19:46.566 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.566 20:11:28 -- host/discovery.sh@55 -- # xargs 00:19:46.566 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:19:46.566 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.566 20:11:28 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:46.566 20:11:28 -- host/discovery.sh@79 -- # expected_count=2 00:19:46.566 20:11:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:46.566 20:11:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:46.566 20:11:28 -- common/autotest_common.sh@901 -- # local max=10 00:19:46.566 20:11:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:19:46.566 20:11:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:46.566 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.566 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:46.566 20:11:28 -- host/discovery.sh@74 -- # jq '. | length' 00:19:46.566 20:11:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.566 20:11:28 -- host/discovery.sh@74 -- # notification_count=2 00:19:46.566 20:11:28 -- host/discovery.sh@75 -- # notify_id=4 00:19:46.566 20:11:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:19:46.566 20:11:28 -- common/autotest_common.sh@904 -- # return 0 00:19:46.566 20:11:28 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:46.566 20:11:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.566 20:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:47.956 [2024-04-24 20:11:29.784959] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:47.956 [2024-04-24 20:11:29.784998] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:47.956 [2024-04-24 20:11:29.785013] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:47.956 [2024-04-24 20:11:29.790972] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:19:47.956 [2024-04-24 20:11:29.849893] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:47.956 [2024-04-24 20:11:29.849934] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:47.956 20:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.956 20:11:29 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:47.956 20:11:29 -- 
common/autotest_common.sh@638 -- # local es=0 00:19:47.956 20:11:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:47.956 20:11:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.956 20:11:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.956 20:11:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.956 20:11:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.956 20:11:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:47.956 20:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.956 20:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.956 request: 00:19:47.956 { 00:19:47.956 "name": "nvme", 00:19:47.956 "trtype": "tcp", 00:19:47.956 "traddr": "10.0.0.2", 00:19:47.956 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:47.956 "adrfam": "ipv4", 00:19:47.956 "trsvcid": "8009", 00:19:47.956 "wait_for_attach": true, 00:19:47.956 "method": "bdev_nvme_start_discovery", 00:19:47.956 "req_id": 1 00:19:47.956 } 00:19:47.956 Got JSON-RPC error response 00:19:47.956 response: 00:19:47.956 { 00:19:47.956 "code": -17, 00:19:47.956 "message": "File exists" 00:19:47.956 } 00:19:47.956 20:11:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.956 20:11:29 -- common/autotest_common.sh@641 -- # es=1 00:19:47.956 20:11:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.956 20:11:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.956 20:11:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.956 20:11:29 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:47.956 20:11:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:47.956 20:11:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:47.956 20:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.956 20:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.956 20:11:29 -- host/discovery.sh@67 -- # sort 00:19:47.956 20:11:29 -- host/discovery.sh@67 -- # xargs 00:19:47.956 20:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.956 20:11:29 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:47.956 20:11:29 -- host/discovery.sh@146 -- # get_bdev_list 00:19:47.956 20:11:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:47.956 20:11:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:47.956 20:11:29 -- host/discovery.sh@55 -- # sort 00:19:47.956 20:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.956 20:11:29 -- host/discovery.sh@55 -- # xargs 00:19:47.956 20:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.956 20:11:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.956 20:11:29 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:47.956 20:11:29 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:47.956 20:11:29 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.956 20:11:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:47.956 20:11:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.956 20:11:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.956 20:11:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.956 20:11:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.956 20:11:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:47.956 20:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.956 20:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.956 request: 00:19:47.956 { 00:19:47.956 "name": "nvme_second", 00:19:47.956 "trtype": "tcp", 00:19:47.956 "traddr": "10.0.0.2", 00:19:47.957 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:47.957 "adrfam": "ipv4", 00:19:47.957 "trsvcid": "8009", 00:19:47.957 "wait_for_attach": true, 00:19:47.957 "method": "bdev_nvme_start_discovery", 00:19:47.957 "req_id": 1 00:19:47.957 } 00:19:47.957 Got JSON-RPC error response 00:19:47.957 response: 00:19:47.957 { 00:19:47.957 "code": -17, 00:19:47.957 "message": "File exists" 00:19:47.957 } 00:19:47.957 20:11:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:47.957 20:11:29 -- common/autotest_common.sh@641 -- # es=1 00:19:47.957 20:11:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:47.957 20:11:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:47.957 20:11:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:47.957 20:11:29 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:47.957 20:11:29 -- host/discovery.sh@67 -- # xargs 00:19:47.957 20:11:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:47.957 20:11:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:47.957 20:11:29 -- host/discovery.sh@67 -- # sort 00:19:47.957 20:11:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.957 20:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:47.957 20:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.957 20:11:30 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:47.957 20:11:30 -- host/discovery.sh@152 -- # get_bdev_list 00:19:47.957 20:11:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:47.957 20:11:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:47.957 20:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.957 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.957 20:11:30 -- host/discovery.sh@55 -- # sort 00:19:47.957 20:11:30 -- host/discovery.sh@55 -- # xargs 00:19:47.957 20:11:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.957 20:11:30 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:47.957 20:11:30 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:47.957 20:11:30 -- common/autotest_common.sh@638 -- # local es=0 00:19:47.957 20:11:30 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:47.957 20:11:30 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:47.957 20:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
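Note: the two request/response blocks above are deliberate failure cases. Starting discovery again under the name nvme, or under a new name (nvme_second) but against the same 10.0.0.2:8009 endpoint, is rejected by the host app with JSON-RPC error -17 ("File exists"), and the NOT wrapper asserts that the command really does fail. The attempt that begins here instead targets port 8010, where nothing is listening, with an attach timeout of 3000 ms, so it is expected to end in error -110 ("Connection timed out") after the connect attempts below fail with errno 111. The shape of such a negative check, as a sketch (the actual NOT helper lives in test/common/autotest_common.sh):

  # the command must fail; if it succeeds the test is broken
  if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
         -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w; then
      echo "expected JSON-RPC -17 (File exists): discovery 'nvme' already running" >&2
      exit 1
  fi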
00:19:47.957 20:11:30 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:47.957 20:11:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:47.957 20:11:30 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:47.957 20:11:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.957 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:48.892 [2024-04-24 20:11:31.101776] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.892 [2024-04-24 20:11:31.101876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.892 [2024-04-24 20:11:31.101902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.892 [2024-04-24 20:11:31.101912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e78de0 with addr=10.0.0.2, port=8010 00:19:48.892 [2024-04-24 20:11:31.101930] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:48.892 [2024-04-24 20:11:31.101938] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:48.892 [2024-04-24 20:11:31.101944] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:50.268 [2024-04-24 20:11:32.099851] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.268 [2024-04-24 20:11:32.099935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.268 [2024-04-24 20:11:32.099961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.268 [2024-04-24 20:11:32.099971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee74c0 with addr=10.0.0.2, port=8010 00:19:50.268 [2024-04-24 20:11:32.099988] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:50.268 [2024-04-24 20:11:32.099996] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:50.268 [2024-04-24 20:11:32.100003] bdev_nvme.c:6968:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:19:51.202 [2024-04-24 20:11:33.097790] bdev_nvme.c:6949:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:19:51.202 request: 00:19:51.202 { 00:19:51.202 "name": "nvme_second", 00:19:51.202 "trtype": "tcp", 00:19:51.202 "traddr": "10.0.0.2", 00:19:51.202 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:51.202 "adrfam": "ipv4", 00:19:51.202 "trsvcid": "8010", 00:19:51.202 "attach_timeout_ms": 3000, 00:19:51.202 "method": "bdev_nvme_start_discovery", 00:19:51.202 "req_id": 1 00:19:51.202 } 00:19:51.202 Got JSON-RPC error response 00:19:51.202 response: 00:19:51.202 { 00:19:51.202 "code": -110, 00:19:51.202 "message": "Connection timed out" 00:19:51.202 } 00:19:51.202 20:11:33 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:51.202 20:11:33 -- common/autotest_common.sh@641 -- # es=1 00:19:51.202 20:11:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:51.202 20:11:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:51.202 20:11:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:51.202 20:11:33 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:51.202 20:11:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:51.202 20:11:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_discovery_info 00:19:51.202 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:51.202 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:19:51.202 20:11:33 -- host/discovery.sh@67 -- # sort 00:19:51.202 20:11:33 -- host/discovery.sh@67 -- # xargs 00:19:51.202 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:51.202 20:11:33 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:51.202 20:11:33 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:51.202 20:11:33 -- host/discovery.sh@161 -- # kill 73213 00:19:51.202 20:11:33 -- host/discovery.sh@162 -- # nvmftestfini 00:19:51.202 20:11:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:51.202 20:11:33 -- nvmf/common.sh@117 -- # sync 00:19:51.202 20:11:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.202 20:11:33 -- nvmf/common.sh@120 -- # set +e 00:19:51.202 20:11:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.202 20:11:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.202 rmmod nvme_tcp 00:19:51.202 rmmod nvme_fabrics 00:19:51.202 rmmod nvme_keyring 00:19:51.202 20:11:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.202 20:11:33 -- nvmf/common.sh@124 -- # set -e 00:19:51.202 20:11:33 -- nvmf/common.sh@125 -- # return 0 00:19:51.202 20:11:33 -- nvmf/common.sh@478 -- # '[' -n 73178 ']' 00:19:51.202 20:11:33 -- nvmf/common.sh@479 -- # killprocess 73178 00:19:51.202 20:11:33 -- common/autotest_common.sh@936 -- # '[' -z 73178 ']' 00:19:51.202 20:11:33 -- common/autotest_common.sh@940 -- # kill -0 73178 00:19:51.202 20:11:33 -- common/autotest_common.sh@941 -- # uname 00:19:51.202 20:11:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.202 20:11:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73178 00:19:51.202 20:11:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:51.202 20:11:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:51.202 killing process with pid 73178 00:19:51.202 20:11:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73178' 00:19:51.202 20:11:33 -- common/autotest_common.sh@955 -- # kill 73178 00:19:51.202 [2024-04-24 20:11:33.304884] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:51.202 20:11:33 -- common/autotest_common.sh@960 -- # wait 73178 00:19:51.458 20:11:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:51.458 20:11:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:51.459 20:11:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:51.459 20:11:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.459 20:11:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.459 20:11:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.459 20:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.459 20:11:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.459 20:11:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:51.459 00:19:51.459 real 0m9.659s 00:19:51.459 user 0m18.227s 00:19:51.459 sys 0m1.952s 00:19:51.459 20:11:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:51.459 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:19:51.459 ************************************ 00:19:51.459 END TEST nvmf_discovery 00:19:51.459 
************************************ 00:19:51.459 20:11:33 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:51.459 20:11:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:51.459 20:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.459 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:19:51.716 ************************************ 00:19:51.716 START TEST nvmf_discovery_remove_ifc 00:19:51.716 ************************************ 00:19:51.716 20:11:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:51.716 * Looking for test storage... 00:19:51.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:51.716 20:11:33 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.716 20:11:33 -- nvmf/common.sh@7 -- # uname -s 00:19:51.716 20:11:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.716 20:11:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.716 20:11:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.716 20:11:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.716 20:11:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.716 20:11:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.716 20:11:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.716 20:11:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.717 20:11:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.717 20:11:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.717 20:11:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:51.717 20:11:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:19:51.717 20:11:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.717 20:11:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.717 20:11:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.717 20:11:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.717 20:11:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.717 20:11:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.717 20:11:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.717 20:11:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.717 20:11:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.717 20:11:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.717 20:11:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.717 20:11:33 -- paths/export.sh@5 -- # export PATH 00:19:51.717 20:11:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.717 20:11:33 -- nvmf/common.sh@47 -- # : 0 00:19:51.717 20:11:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.717 20:11:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.717 20:11:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.717 20:11:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.717 20:11:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.717 20:11:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.717 20:11:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.717 20:11:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:51.717 20:11:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:51.717 20:11:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:51.717 20:11:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.717 20:11:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:51.717 20:11:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:51.717 20:11:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:51.717 20:11:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.717 20:11:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.717 20:11:33 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.717 20:11:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:51.717 20:11:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:51.717 20:11:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:51.717 20:11:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:51.717 20:11:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:51.717 20:11:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:51.717 20:11:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.717 20:11:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.717 20:11:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:51.717 20:11:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:51.717 20:11:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.717 20:11:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.717 20:11:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.717 20:11:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.717 20:11:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.717 20:11:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.717 20:11:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.717 20:11:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.717 20:11:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:51.717 20:11:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:51.717 Cannot find device "nvmf_tgt_br" 00:19:51.717 20:11:33 -- nvmf/common.sh@155 -- # true 00:19:51.717 20:11:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.717 Cannot find device "nvmf_tgt_br2" 00:19:51.717 20:11:33 -- nvmf/common.sh@156 -- # true 00:19:51.717 20:11:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:51.717 20:11:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:51.717 Cannot find device "nvmf_tgt_br" 00:19:51.717 20:11:33 -- nvmf/common.sh@158 -- # true 00:19:51.717 20:11:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:51.717 Cannot find device "nvmf_tgt_br2" 00:19:51.976 20:11:33 -- nvmf/common.sh@159 -- # true 00:19:51.976 20:11:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:51.976 20:11:34 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:51.976 20:11:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.976 20:11:34 -- nvmf/common.sh@162 -- # true 00:19:51.976 20:11:34 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.976 20:11:34 -- nvmf/common.sh@163 -- # true 00:19:51.976 20:11:34 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.976 20:11:34 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.976 20:11:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.976 20:11:34 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.976 20:11:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.976 20:11:34 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.976 20:11:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.976 20:11:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:51.976 20:11:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:51.976 20:11:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:51.976 20:11:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:51.976 20:11:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:51.976 20:11:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:51.976 20:11:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:51.976 20:11:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:51.976 20:11:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:51.976 20:11:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:51.976 20:11:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:51.976 20:11:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:51.976 20:11:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:51.976 20:11:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:51.976 20:11:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:51.976 20:11:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:51.976 20:11:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:51.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:51.976 00:19:51.976 --- 10.0.0.2 ping statistics --- 00:19:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.976 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:51.976 20:11:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:51.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:51.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:19:51.976 00:19:51.976 --- 10.0.0.3 ping statistics --- 00:19:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.976 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:51.976 20:11:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:51.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:51.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:19:51.976 00:19:51.976 --- 10.0.0.1 ping statistics --- 00:19:51.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.976 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:19:51.976 20:11:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.976 20:11:34 -- nvmf/common.sh@422 -- # return 0 00:19:51.976 20:11:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:51.976 20:11:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.976 20:11:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:51.976 20:11:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:51.976 20:11:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.976 20:11:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:51.976 20:11:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:51.976 20:11:34 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:51.976 20:11:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:51.976 20:11:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:51.976 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:51.976 20:11:34 -- nvmf/common.sh@470 -- # nvmfpid=73660 00:19:51.976 20:11:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:51.976 20:11:34 -- nvmf/common.sh@471 -- # waitforlisten 73660 00:19:51.976 20:11:34 -- common/autotest_common.sh@817 -- # '[' -z 73660 ']' 00:19:51.976 20:11:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.976 20:11:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:51.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.976 20:11:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.976 20:11:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:51.977 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:51.977 [2024-04-24 20:11:34.218888] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:19:51.977 [2024-04-24 20:11:34.218953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.236 [2024-04-24 20:11:34.344220] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.236 [2024-04-24 20:11:34.431467] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.236 [2024-04-24 20:11:34.431517] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.236 [2024-04-24 20:11:34.431524] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.236 [2024-04-24 20:11:34.431528] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.236 [2024-04-24 20:11:34.431532] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
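(For orientation: the nvmf_veth_init trace above amounts to the following topology, built before the target application is launched inside the namespace. This is only a condensed recap of commands already shown in the trace; the namespace, interface, and address names are the ones the harness itself uses.)

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, 10.0.0.2/24, moved into the netns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface, 10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                       # bridge the *_br peers so 10.0.0.1 can reach 10.0.0.2/.3
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # target app, pid 73660 above

The three pings (10.0.0.2, 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) are the harness verifying that topology before any NVMe-oF traffic is attempted.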
00:19:52.236 [2024-04-24 20:11:34.431552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.172 20:11:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.172 20:11:35 -- common/autotest_common.sh@850 -- # return 0 00:19:53.172 20:11:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.172 20:11:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.172 20:11:35 -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 20:11:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.172 20:11:35 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:53.172 20:11:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.172 20:11:35 -- common/autotest_common.sh@10 -- # set +x 00:19:53.172 [2024-04-24 20:11:35.145051] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.172 [2024-04-24 20:11:35.152985] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:53.172 [2024-04-24 20:11:35.153168] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:53.172 null0 00:19:53.172 [2024-04-24 20:11:35.189046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.172 20:11:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.172 20:11:35 -- host/discovery_remove_ifc.sh@59 -- # hostpid=73702 00:19:53.172 20:11:35 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:53.172 20:11:35 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 73702 /tmp/host.sock 00:19:53.172 20:11:35 -- common/autotest_common.sh@817 -- # '[' -z 73702 ']' 00:19:53.172 20:11:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:19:53.172 20:11:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.172 20:11:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:53.172 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:53.173 20:11:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.173 20:11:35 -- common/autotest_common.sh@10 -- # set +x 00:19:53.173 [2024-04-24 20:11:35.260361] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:19:53.173 [2024-04-24 20:11:35.260444] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73702 ] 00:19:53.173 [2024-04-24 20:11:35.397940] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.431 [2024-04-24 20:11:35.488413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.997 20:11:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.997 20:11:36 -- common/autotest_common.sh@850 -- # return 0 00:19:53.997 20:11:36 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.997 20:11:36 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:53.997 20:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.997 20:11:36 -- common/autotest_common.sh@10 -- # set +x 00:19:53.997 20:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.997 20:11:36 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:53.997 20:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.997 20:11:36 -- common/autotest_common.sh@10 -- # set +x 00:19:53.997 20:11:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.997 20:11:36 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:53.997 20:11:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.997 20:11:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.372 [2024-04-24 20:11:37.203687] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:55.372 [2024-04-24 20:11:37.203715] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:55.372 [2024-04-24 20:11:37.203728] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:55.372 [2024-04-24 20:11:37.209718] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:55.372 [2024-04-24 20:11:37.265283] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:55.372 [2024-04-24 20:11:37.265358] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:55.372 [2024-04-24 20:11:37.265393] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:55.372 [2024-04-24 20:11:37.265410] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:55.372 [2024-04-24 20:11:37.265433] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:55.372 20:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:55.372 [2024-04-24 20:11:37.272314] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c85170 was disconnected and freed. delete nvme_qpair. 
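(The host side of this test is the second nvmf_tgt, pid 73702, started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme and driven entirely over /tmp/host.sock. The sequence just traced reduces to roughly the following; rpc_cmd is the test helper around SPDK's JSON-RPC client, and the loop is a sketch of what the get_bdev_list/wait_for_bdev helpers from discovery_remove_ifc.sh are doing, not an extra step.)

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait until the namespace exported by the target shows up as a bdev on the host side:
    get_bdev_list() { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do sleep 1; done

Once the discovery controller attaches, bdev_nvme creates subsystem nvme0 and the nvme0n1 bdev, which is exactly what the qpair attach/free messages above report.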
00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:55.372 20:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:55.372 20:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.372 20:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:55.372 20:11:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.372 20:11:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:55.372 20:11:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:55.372 20:11:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:56.306 20:11:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:56.306 20:11:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:56.307 20:11:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:56.307 20:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:56.307 20:11:38 -- common/autotest_common.sh@10 -- # set +x 00:19:56.307 20:11:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:56.307 20:11:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:56.307 20:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.307 20:11:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:56.307 20:11:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:57.240 20:11:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:57.240 20:11:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.240 20:11:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.240 20:11:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:57.240 20:11:39 -- common/autotest_common.sh@10 -- # set +x 00:19:57.240 20:11:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:57.240 20:11:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:57.498 20:11:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.498 20:11:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:57.498 20:11:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:58.431 20:11:40 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:58.431 20:11:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:58.431 20:11:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:58.431 20:11:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:59.364 20:11:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:59.364 20:11:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.364 20:11:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:59.364 20:11:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:59.364 20:11:41 -- common/autotest_common.sh@10 -- # set +x 00:19:59.364 20:11:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:59.364 20:11:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:59.364 20:11:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:59.623 20:11:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:59.623 20:11:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:00.657 20:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:00.657 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:00.657 20:11:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:00.657 [2024-04-24 20:11:42.682919] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:00.657 [2024-04-24 20:11:42.682978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.657 [2024-04-24 20:11:42.682990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.657 [2024-04-24 20:11:42.683001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.657 [2024-04-24 20:11:42.683008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.657 [2024-04-24 20:11:42.683015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.657 [2024-04-24 20:11:42.683022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.657 [2024-04-24 20:11:42.683030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.657 [2024-04-24 20:11:42.683036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.657 [2024-04-24 20:11:42.683044] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.657 [2024-04-24 20:11:42.683053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.657 [2024-04-24 20:11:42.683059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4040 is same with the state(5) to be set 00:20:00.657 [2024-04-24 20:11:42.692895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf4040 (9): Bad file descriptor 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:00.657 20:11:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:00.657 [2024-04-24 20:11:42.702895] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:01.603 20:11:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:01.603 20:11:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:01.603 20:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.603 20:11:43 -- common/autotest_common.sh@10 -- # set +x 00:20:01.603 20:11:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:01.603 20:11:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:01.603 20:11:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:01.603 [2024-04-24 20:11:43.726520] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:02.538 [2024-04-24 20:11:44.750473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:03.916 [2024-04-24 20:11:45.774505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:03.916 [2024-04-24 20:11:45.774675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bf4040 with addr=10.0.0.2, port=4420 00:20:03.916 [2024-04-24 20:11:45.774716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4040 is same with the state(5) to be set 00:20:03.916 [2024-04-24 20:11:45.775912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf4040 (9): Bad file descriptor 00:20:03.916 [2024-04-24 20:11:45.776014] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:03.916 [2024-04-24 20:11:45.776074] bdev_nvme.c:6657:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:03.916 [2024-04-24 20:11:45.776148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.916 [2024-04-24 20:11:45.776189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-04-24 20:11:45.776218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.916 [2024-04-24 20:11:45.776240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-04-24 20:11:45.776267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.916 [2024-04-24 20:11:45.776289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-04-24 20:11:45.776322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.916 [2024-04-24 20:11:45.776344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-04-24 20:11:45.776372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:03.916 [2024-04-24 20:11:45.776426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:03.916 [2024-04-24 20:11:45.776448] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
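(A note on the error codes threaded through this stretch of the trace: errno 111 is ECONNREFUSED, seen earlier when discovery was pointed at port 8010 with nothing listening, while errno 110 is ETIMEDOUT, seen here because the target's interface was taken down and connect attempts simply black-hole. The reconnect behaviour observed above lines up with the options the discovery was started with; the reading below is a sketch, not an authoritative statement of bdev_nvme semantics.)

    #   --reconnect-delay-sec 1      -> retry the connection roughly once per second
    #   --fast-io-fail-timeout-sec 1 -> fail outstanding I/O after about 1s of disconnection
    #   --ctrlr-loss-timeout-sec 2   -> give up and delete the controller after about 2s,
    #                                   which is consistent with remove_discovery_entry firing
    #                                   here and get_bdev_list draining from "nvme0n1" to "" below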
00:20:03.916 [2024-04-24 20:11:45.776480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf3900 (9): Bad file descriptor 00:20:03.916 [2024-04-24 20:11:45.777042] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:03.916 [2024-04-24 20:11:45.777089] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:03.916 20:11:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:03.916 20:11:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:03.916 20:11:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:04.851 20:11:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:04.851 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:04.851 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:04.851 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.851 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:04.852 20:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 20:11:46 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 20:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:04.852 20:11:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.852 20:11:46 -- common/autotest_common.sh@10 -- # set +x 00:20:04.852 20:11:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:04.852 20:11:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:05.800 [2024-04-24 20:11:47.777794] bdev_nvme.c:6906:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:05.800 [2024-04-24 20:11:47.777835] bdev_nvme.c:6986:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:05.801 [2024-04-24 20:11:47.777850] bdev_nvme.c:6869:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:05.801 [2024-04-24 20:11:47.783809] bdev_nvme.c:6835:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:05.801 [2024-04-24 20:11:47.838841] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:05.801 [2024-04-24 20:11:47.838906] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:05.801 [2024-04-24 20:11:47.838927] bdev_nvme.c:7696:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:05.801 [2024-04-24 20:11:47.838945] bdev_nvme.c:6725:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:20:05.801 [2024-04-24 20:11:47.838953] bdev_nvme.c:6684:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:05.801 [2024-04-24 20:11:47.846478] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c92590 was disconnected and freed. delete nvme_qpair. 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:05.801 20:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.801 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:05.801 20:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:05.801 20:11:47 -- host/discovery_remove_ifc.sh@90 -- # killprocess 73702 00:20:05.801 20:11:47 -- common/autotest_common.sh@936 -- # '[' -z 73702 ']' 00:20:05.801 20:11:47 -- common/autotest_common.sh@940 -- # kill -0 73702 00:20:05.801 20:11:47 -- common/autotest_common.sh@941 -- # uname 00:20:05.801 20:11:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.801 20:11:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73702 00:20:05.801 killing process with pid 73702 00:20:05.801 20:11:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:05.801 20:11:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:05.801 20:11:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73702' 00:20:05.801 20:11:48 -- common/autotest_common.sh@955 -- # kill 73702 00:20:05.801 20:11:48 -- common/autotest_common.sh@960 -- # wait 73702 00:20:06.060 20:11:48 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:06.060 20:11:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:06.060 20:11:48 -- nvmf/common.sh@117 -- # sync 00:20:06.060 20:11:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.060 20:11:48 -- nvmf/common.sh@120 -- # set +e 00:20:06.060 20:11:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.060 20:11:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.060 rmmod nvme_tcp 00:20:06.060 rmmod nvme_fabrics 00:20:06.060 rmmod nvme_keyring 00:20:06.321 20:11:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.321 20:11:48 -- nvmf/common.sh@124 -- # set -e 00:20:06.321 20:11:48 -- nvmf/common.sh@125 -- # return 0 00:20:06.321 20:11:48 -- nvmf/common.sh@478 -- # '[' -n 73660 ']' 00:20:06.321 20:11:48 -- nvmf/common.sh@479 -- # killprocess 73660 00:20:06.321 20:11:48 -- common/autotest_common.sh@936 -- # '[' -z 73660 ']' 00:20:06.321 20:11:48 -- common/autotest_common.sh@940 -- # kill -0 73660 00:20:06.321 20:11:48 -- common/autotest_common.sh@941 -- # uname 00:20:06.321 20:11:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.321 20:11:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73660 00:20:06.321 killing process with pid 73660 00:20:06.321 20:11:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:06.321 20:11:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
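(Stripped of the polling, the remove-interface scenario is the following round trip; every command here was executed by the script above, against the same nvmf_tgt_ns_spdk namespace and nvmf_tgt_if device, and is gathered in one place only to make the nvme0n1 -> nvme1n1 transition easy to follow.)

    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if   # pull the target address...
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down              # ...nvme0n1 times out and is deleted
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # restore the address...
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up                # ...discovery re-attaches as nvme1/nvme1n1
    # once the nvme1n1 check passes, the host app (pid 73702) and the target (pid 73660)
    # are shut down and nvmftestfini tears the namespace and bridge back down, as traced below.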
00:20:06.321 20:11:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73660' 00:20:06.321 20:11:48 -- common/autotest_common.sh@955 -- # kill 73660 00:20:06.321 [2024-04-24 20:11:48.378892] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:06.321 20:11:48 -- common/autotest_common.sh@960 -- # wait 73660 00:20:06.581 20:11:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.581 20:11:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:06.581 20:11:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:06.581 20:11:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.581 20:11:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.581 20:11:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.581 20:11:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.581 20:11:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.581 20:11:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:06.581 00:20:06.581 real 0m14.947s 00:20:06.581 user 0m23.892s 00:20:06.581 sys 0m2.457s 00:20:06.581 20:11:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.581 20:11:48 -- common/autotest_common.sh@10 -- # set +x 00:20:06.581 ************************************ 00:20:06.581 END TEST nvmf_discovery_remove_ifc 00:20:06.581 ************************************ 00:20:06.581 20:11:48 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:06.581 20:11:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.581 20:11:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.581 20:11:48 -- common/autotest_common.sh@10 -- # set +x 00:20:06.581 ************************************ 00:20:06.581 START TEST nvmf_identify_kernel_target 00:20:06.581 ************************************ 00:20:06.581 20:11:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:06.843 * Looking for test storage... 
00:20:06.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.843 20:11:48 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.843 20:11:48 -- nvmf/common.sh@7 -- # uname -s 00:20:06.843 20:11:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.843 20:11:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.843 20:11:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.843 20:11:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.843 20:11:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.843 20:11:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.843 20:11:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.843 20:11:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.843 20:11:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.843 20:11:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.843 20:11:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:20:06.843 20:11:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:20:06.843 20:11:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.843 20:11:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.843 20:11:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.843 20:11:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.843 20:11:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.843 20:11:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.843 20:11:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.843 20:11:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.843 20:11:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.843 20:11:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.843 20:11:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.843 20:11:48 -- paths/export.sh@5 -- # export PATH 00:20:06.843 20:11:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.843 20:11:48 -- nvmf/common.sh@47 -- # : 0 00:20:06.843 20:11:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.843 20:11:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.843 20:11:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.843 20:11:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.843 20:11:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.843 20:11:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.843 20:11:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.843 20:11:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.844 20:11:48 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:06.844 20:11:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:06.844 20:11:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.844 20:11:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:06.844 20:11:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:06.844 20:11:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:06.844 20:11:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.844 20:11:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.844 20:11:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.844 20:11:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:06.844 20:11:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:06.844 20:11:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:06.844 20:11:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:06.844 20:11:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:06.844 20:11:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:06.844 20:11:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.844 20:11:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.844 20:11:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.844 20:11:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:06.844 20:11:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.844 20:11:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.844 20:11:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.844 20:11:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:06.844 20:11:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.844 20:11:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.844 20:11:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.844 20:11:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.844 20:11:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:06.844 20:11:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:06.844 Cannot find device "nvmf_tgt_br" 00:20:06.844 20:11:49 -- nvmf/common.sh@155 -- # true 00:20:06.844 20:11:49 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.844 Cannot find device "nvmf_tgt_br2" 00:20:06.844 20:11:49 -- nvmf/common.sh@156 -- # true 00:20:06.844 20:11:49 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:06.844 20:11:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:06.844 Cannot find device "nvmf_tgt_br" 00:20:06.844 20:11:49 -- nvmf/common.sh@158 -- # true 00:20:06.844 20:11:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:06.844 Cannot find device "nvmf_tgt_br2" 00:20:06.844 20:11:49 -- nvmf/common.sh@159 -- # true 00:20:06.844 20:11:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:07.104 20:11:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:07.104 20:11:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.104 20:11:49 -- nvmf/common.sh@162 -- # true 00:20:07.104 20:11:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.104 20:11:49 -- nvmf/common.sh@163 -- # true 00:20:07.104 20:11:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.104 20:11:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.104 20:11:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.104 20:11:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.104 20:11:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.104 20:11:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.104 20:11:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.104 20:11:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:07.104 20:11:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:07.104 20:11:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:07.104 20:11:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:07.104 20:11:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:07.104 20:11:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:07.104 20:11:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.104 20:11:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.104 20:11:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.104 20:11:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:07.104 20:11:49 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:07.104 20:11:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.104 20:11:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.104 20:11:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.104 20:11:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.104 20:11:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.104 20:11:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:07.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:20:07.104 00:20:07.104 --- 10.0.0.2 ping statistics --- 00:20:07.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.104 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:07.104 20:11:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:07.104 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.104 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:07.104 00:20:07.104 --- 10.0.0.3 ping statistics --- 00:20:07.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.104 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:07.104 20:11:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:20:07.104 00:20:07.104 --- 10.0.0.1 ping statistics --- 00:20:07.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.104 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:07.104 20:11:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.104 20:11:49 -- nvmf/common.sh@422 -- # return 0 00:20:07.104 20:11:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:07.104 20:11:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.104 20:11:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.104 20:11:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:07.104 20:11:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:07.104 20:11:49 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:07.104 20:11:49 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:07.104 20:11:49 -- nvmf/common.sh@717 -- # local ip 00:20:07.104 20:11:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:07.104 20:11:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:07.104 20:11:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.104 20:11:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.104 20:11:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:07.104 20:11:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:07.104 20:11:49 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:07.104 20:11:49 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:07.104 20:11:49 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:07.104 20:11:49 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:07.104 20:11:49 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:07.104 20:11:49 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:07.104 20:11:49 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:07.104 20:11:49 -- nvmf/common.sh@628 -- # local block nvme 00:20:07.104 20:11:49 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:07.104 20:11:49 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:07.104 20:11:49 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:07.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:07.674 Waiting for block devices as requested 00:20:07.674 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:07.934 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:07.934 20:11:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:07.934 20:11:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:07.934 20:11:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:07.934 20:11:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:07.934 20:11:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:07.934 20:11:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:07.934 20:11:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:07.934 20:11:50 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:07.934 20:11:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:07.934 No valid GPT data, bailing 00:20:07.934 20:11:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:07.934 20:11:50 -- scripts/common.sh@391 -- # pt= 00:20:07.934 20:11:50 -- scripts/common.sh@392 -- # return 1 00:20:07.934 20:11:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:07.934 20:11:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:07.934 20:11:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:07.934 20:11:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:07.934 20:11:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:07.934 20:11:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:07.934 20:11:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:07.934 20:11:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:07.934 20:11:50 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:07.934 20:11:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:08.194 No valid GPT data, bailing 00:20:08.194 20:11:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:08.194 20:11:50 -- scripts/common.sh@391 -- # pt= 00:20:08.194 20:11:50 -- scripts/common.sh@392 -- # return 1 00:20:08.194 20:11:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:08.194 20:11:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:08.194 20:11:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:08.194 20:11:50 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:20:08.194 20:11:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:08.194 20:11:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:08.194 20:11:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:08.194 20:11:50 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:20:08.194 20:11:50 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:08.194 20:11:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:08.194 No valid GPT data, bailing 00:20:08.194 20:11:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:08.194 20:11:50 -- scripts/common.sh@391 -- # pt= 00:20:08.194 20:11:50 -- scripts/common.sh@392 -- # return 1 00:20:08.194 20:11:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:08.194 20:11:50 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:08.194 20:11:50 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:08.194 20:11:50 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:08.194 20:11:50 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:08.194 20:11:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:08.194 20:11:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:08.194 20:11:50 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:08.194 20:11:50 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:08.194 20:11:50 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:08.194 No valid GPT data, bailing 00:20:08.194 20:11:50 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:08.194 20:11:50 -- scripts/common.sh@391 -- # pt= 00:20:08.194 20:11:50 -- scripts/common.sh@392 -- # return 1 00:20:08.194 20:11:50 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:08.194 20:11:50 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:08.194 20:11:50 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:08.194 20:11:50 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:08.194 20:11:50 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:08.194 20:11:50 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:08.194 20:11:50 -- nvmf/common.sh@656 -- # echo 1 00:20:08.194 20:11:50 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:08.194 20:11:50 -- nvmf/common.sh@658 -- # echo 1 00:20:08.194 20:11:50 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:08.194 20:11:50 -- nvmf/common.sh@661 -- # echo tcp 00:20:08.194 20:11:50 -- nvmf/common.sh@662 -- # echo 4420 00:20:08.194 20:11:50 -- nvmf/common.sh@663 -- # echo ipv4 00:20:08.194 20:11:50 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:08.194 20:11:50 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -a 10.0.0.1 -t tcp -s 4420 00:20:08.454 00:20:08.454 Discovery Log Number of Records 2, Generation counter 2 00:20:08.454 =====Discovery Log Entry 0====== 00:20:08.454 trtype: tcp 00:20:08.454 adrfam: ipv4 00:20:08.454 subtype: current discovery subsystem 00:20:08.454 treq: not specified, sq flow control disable supported 00:20:08.454 portid: 1 00:20:08.454 trsvcid: 4420 00:20:08.454 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:08.454 traddr: 10.0.0.1 00:20:08.454 eflags: none 00:20:08.454 sectype: none 00:20:08.454 =====Discovery Log Entry 1====== 00:20:08.454 trtype: tcp 00:20:08.454 adrfam: ipv4 00:20:08.454 subtype: nvme subsystem 00:20:08.454 treq: not specified, sq flow control disable supported 00:20:08.454 portid: 1 00:20:08.454 trsvcid: 4420 00:20:08.454 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:08.454 traddr: 10.0.0.1 00:20:08.454 eflags: none 00:20:08.454 sectype: none 00:20:08.454 20:11:50 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:08.454 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:08.454 ===================================================== 00:20:08.454 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:08.454 ===================================================== 00:20:08.454 Controller Capabilities/Features 00:20:08.454 ================================ 00:20:08.454 Vendor ID: 0000 00:20:08.454 Subsystem Vendor ID: 0000 00:20:08.454 Serial Number: 269024145bdedfffa9d1 00:20:08.454 Model Number: Linux 00:20:08.454 Firmware Version: 6.7.0-68 00:20:08.454 Recommended Arb Burst: 0 00:20:08.454 IEEE OUI Identifier: 00 00 00 00:20:08.454 Multi-path I/O 00:20:08.454 May have multiple subsystem ports: No 00:20:08.454 May have multiple controllers: No 00:20:08.454 Associated with SR-IOV VF: No 00:20:08.454 Max Data Transfer Size: Unlimited 00:20:08.454 Max Number of Namespaces: 0 00:20:08.454 Max Number of I/O Queues: 1024 00:20:08.454 NVMe Specification Version (VS): 1.3 00:20:08.454 NVMe Specification Version (Identify): 1.3 00:20:08.454 Maximum Queue Entries: 1024 00:20:08.454 Contiguous Queues Required: No 00:20:08.454 Arbitration Mechanisms Supported 00:20:08.454 Weighted Round Robin: Not Supported 00:20:08.454 Vendor Specific: Not Supported 00:20:08.454 Reset Timeout: 7500 ms 00:20:08.454 Doorbell Stride: 4 bytes 00:20:08.454 NVM Subsystem Reset: Not Supported 00:20:08.454 Command Sets Supported 00:20:08.454 NVM Command Set: Supported 00:20:08.454 Boot Partition: Not Supported 00:20:08.454 Memory Page Size Minimum: 4096 bytes 00:20:08.454 Memory Page Size Maximum: 4096 bytes 00:20:08.454 Persistent Memory Region: Not Supported 00:20:08.454 Optional Asynchronous Events Supported 00:20:08.454 Namespace Attribute Notices: Not Supported 00:20:08.454 Firmware Activation Notices: Not Supported 00:20:08.454 ANA Change Notices: Not Supported 00:20:08.454 PLE Aggregate Log Change Notices: Not Supported 00:20:08.454 LBA Status Info Alert Notices: Not Supported 00:20:08.454 EGE Aggregate Log Change Notices: Not Supported 00:20:08.454 Normal NVM Subsystem Shutdown event: Not Supported 00:20:08.454 Zone Descriptor Change Notices: Not Supported 00:20:08.454 Discovery Log Change Notices: Supported 00:20:08.454 Controller Attributes 00:20:08.454 128-bit Host Identifier: Not Supported 00:20:08.454 Non-Operational Permissive Mode: Not Supported 00:20:08.454 NVM Sets: Not Supported 00:20:08.454 Read Recovery Levels: Not Supported 00:20:08.454 Endurance Groups: Not Supported 00:20:08.454 Predictable Latency Mode: Not Supported 00:20:08.454 Traffic Based Keep ALive: Not Supported 00:20:08.454 Namespace Granularity: Not Supported 00:20:08.454 SQ Associations: Not Supported 00:20:08.454 UUID List: Not Supported 00:20:08.454 Multi-Domain Subsystem: Not Supported 00:20:08.454 Fixed Capacity Management: Not Supported 
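Annotation: before the discovery above, the trace (nvmf/common.sh@639-666) scanned /sys/block/nvme* for an unpartitioned, non-zoned disk and exported the chosen device (/dev/nvme1n1) through the kernel nvmet configfs tree. A condensed sketch of that export follows; the xtrace output hides the redirection targets, so the configfs attribute file names used here (attr_allow_any_host, device_path, enable, addr_*) are the standard kernel nvmet ones and should be read as assumptions rather than a literal replay of common.sh.

  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"    # let any host NQN connect
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"              # listen on the initiator-facing veth address
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                  # activate the export on the port

The block-device scan itself relies on scripts/spdk-gpt.py and blkid; "No valid GPT data, bailing" above means the device carries no partition table and is free for the test to claim.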
00:20:08.454 Variable Capacity Management: Not Supported 00:20:08.454 Delete Endurance Group: Not Supported 00:20:08.454 Delete NVM Set: Not Supported 00:20:08.454 Extended LBA Formats Supported: Not Supported 00:20:08.454 Flexible Data Placement Supported: Not Supported 00:20:08.454 00:20:08.454 Controller Memory Buffer Support 00:20:08.454 ================================ 00:20:08.454 Supported: No 00:20:08.454 00:20:08.454 Persistent Memory Region Support 00:20:08.454 ================================ 00:20:08.454 Supported: No 00:20:08.454 00:20:08.454 Admin Command Set Attributes 00:20:08.454 ============================ 00:20:08.454 Security Send/Receive: Not Supported 00:20:08.454 Format NVM: Not Supported 00:20:08.454 Firmware Activate/Download: Not Supported 00:20:08.454 Namespace Management: Not Supported 00:20:08.454 Device Self-Test: Not Supported 00:20:08.454 Directives: Not Supported 00:20:08.454 NVMe-MI: Not Supported 00:20:08.454 Virtualization Management: Not Supported 00:20:08.454 Doorbell Buffer Config: Not Supported 00:20:08.454 Get LBA Status Capability: Not Supported 00:20:08.454 Command & Feature Lockdown Capability: Not Supported 00:20:08.454 Abort Command Limit: 1 00:20:08.454 Async Event Request Limit: 1 00:20:08.454 Number of Firmware Slots: N/A 00:20:08.454 Firmware Slot 1 Read-Only: N/A 00:20:08.454 Firmware Activation Without Reset: N/A 00:20:08.454 Multiple Update Detection Support: N/A 00:20:08.454 Firmware Update Granularity: No Information Provided 00:20:08.454 Per-Namespace SMART Log: No 00:20:08.454 Asymmetric Namespace Access Log Page: Not Supported 00:20:08.454 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:08.454 Command Effects Log Page: Not Supported 00:20:08.454 Get Log Page Extended Data: Supported 00:20:08.454 Telemetry Log Pages: Not Supported 00:20:08.454 Persistent Event Log Pages: Not Supported 00:20:08.454 Supported Log Pages Log Page: May Support 00:20:08.454 Commands Supported & Effects Log Page: Not Supported 00:20:08.454 Feature Identifiers & Effects Log Page:May Support 00:20:08.454 NVMe-MI Commands & Effects Log Page: May Support 00:20:08.454 Data Area 4 for Telemetry Log: Not Supported 00:20:08.454 Error Log Page Entries Supported: 1 00:20:08.454 Keep Alive: Not Supported 00:20:08.454 00:20:08.454 NVM Command Set Attributes 00:20:08.454 ========================== 00:20:08.454 Submission Queue Entry Size 00:20:08.454 Max: 1 00:20:08.454 Min: 1 00:20:08.454 Completion Queue Entry Size 00:20:08.454 Max: 1 00:20:08.454 Min: 1 00:20:08.454 Number of Namespaces: 0 00:20:08.454 Compare Command: Not Supported 00:20:08.454 Write Uncorrectable Command: Not Supported 00:20:08.454 Dataset Management Command: Not Supported 00:20:08.454 Write Zeroes Command: Not Supported 00:20:08.454 Set Features Save Field: Not Supported 00:20:08.454 Reservations: Not Supported 00:20:08.454 Timestamp: Not Supported 00:20:08.454 Copy: Not Supported 00:20:08.454 Volatile Write Cache: Not Present 00:20:08.454 Atomic Write Unit (Normal): 1 00:20:08.454 Atomic Write Unit (PFail): 1 00:20:08.454 Atomic Compare & Write Unit: 1 00:20:08.454 Fused Compare & Write: Not Supported 00:20:08.454 Scatter-Gather List 00:20:08.454 SGL Command Set: Supported 00:20:08.454 SGL Keyed: Not Supported 00:20:08.454 SGL Bit Bucket Descriptor: Not Supported 00:20:08.454 SGL Metadata Pointer: Not Supported 00:20:08.454 Oversized SGL: Not Supported 00:20:08.454 SGL Metadata Address: Not Supported 00:20:08.454 SGL Offset: Supported 00:20:08.454 Transport SGL Data Block: Not 
Supported 00:20:08.454 Replay Protected Memory Block: Not Supported 00:20:08.454 00:20:08.454 Firmware Slot Information 00:20:08.454 ========================= 00:20:08.454 Active slot: 0 00:20:08.454 00:20:08.454 00:20:08.454 Error Log 00:20:08.454 ========= 00:20:08.454 00:20:08.454 Active Namespaces 00:20:08.454 ================= 00:20:08.454 Discovery Log Page 00:20:08.454 ================== 00:20:08.454 Generation Counter: 2 00:20:08.454 Number of Records: 2 00:20:08.454 Record Format: 0 00:20:08.454 00:20:08.454 Discovery Log Entry 0 00:20:08.454 ---------------------- 00:20:08.454 Transport Type: 3 (TCP) 00:20:08.454 Address Family: 1 (IPv4) 00:20:08.454 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:08.454 Entry Flags: 00:20:08.454 Duplicate Returned Information: 0 00:20:08.454 Explicit Persistent Connection Support for Discovery: 0 00:20:08.454 Transport Requirements: 00:20:08.454 Secure Channel: Not Specified 00:20:08.454 Port ID: 1 (0x0001) 00:20:08.455 Controller ID: 65535 (0xffff) 00:20:08.455 Admin Max SQ Size: 32 00:20:08.455 Transport Service Identifier: 4420 00:20:08.455 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:08.455 Transport Address: 10.0.0.1 00:20:08.455 Discovery Log Entry 1 00:20:08.455 ---------------------- 00:20:08.455 Transport Type: 3 (TCP) 00:20:08.455 Address Family: 1 (IPv4) 00:20:08.455 Subsystem Type: 2 (NVM Subsystem) 00:20:08.455 Entry Flags: 00:20:08.455 Duplicate Returned Information: 0 00:20:08.455 Explicit Persistent Connection Support for Discovery: 0 00:20:08.455 Transport Requirements: 00:20:08.455 Secure Channel: Not Specified 00:20:08.455 Port ID: 1 (0x0001) 00:20:08.455 Controller ID: 65535 (0xffff) 00:20:08.455 Admin Max SQ Size: 32 00:20:08.455 Transport Service Identifier: 4420 00:20:08.455 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:08.455 Transport Address: 10.0.0.1 00:20:08.455 20:11:50 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:08.715 get_feature(0x01) failed 00:20:08.715 get_feature(0x02) failed 00:20:08.715 get_feature(0x04) failed 00:20:08.715 ===================================================== 00:20:08.715 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:08.715 ===================================================== 00:20:08.715 Controller Capabilities/Features 00:20:08.715 ================================ 00:20:08.715 Vendor ID: 0000 00:20:08.715 Subsystem Vendor ID: 0000 00:20:08.715 Serial Number: 89d197780ba5cb75c741 00:20:08.715 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:08.715 Firmware Version: 6.7.0-68 00:20:08.715 Recommended Arb Burst: 6 00:20:08.715 IEEE OUI Identifier: 00 00 00 00:20:08.715 Multi-path I/O 00:20:08.715 May have multiple subsystem ports: Yes 00:20:08.715 May have multiple controllers: Yes 00:20:08.715 Associated with SR-IOV VF: No 00:20:08.715 Max Data Transfer Size: Unlimited 00:20:08.715 Max Number of Namespaces: 1024 00:20:08.715 Max Number of I/O Queues: 128 00:20:08.715 NVMe Specification Version (VS): 1.3 00:20:08.715 NVMe Specification Version (Identify): 1.3 00:20:08.715 Maximum Queue Entries: 1024 00:20:08.715 Contiguous Queues Required: No 00:20:08.715 Arbitration Mechanisms Supported 00:20:08.715 Weighted Round Robin: Not Supported 00:20:08.715 Vendor Specific: Not Supported 00:20:08.715 Reset Timeout: 7500 ms 00:20:08.715 Doorbell Stride: 4 bytes 
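Annotation: the discovery log rendered above advertises two records on 10.0.0.1:4420, the discovery subsystem itself (subtype 3) and nqn.2016-06.io.spdk:testnqn (subtype 2, an NVM subsystem). This test only inspects them with spdk_nvme_identify, but the same records could be consumed from the host with nvme-cli; a purely illustrative connect, reusing the host identity generated earlier in the log, would look roughly like:

  nvme connect -t tcp -a 10.0.0.1 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf \
      --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf
  nvme list                                        # the exported namespace shows up as a new /dev/nvmeXnY
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn

The surrounding identify output comes from querying that second record directly through the transport ID string passed to spdk_nvme_identify -r; the get_feature(0x01/0x02/0x04) failures indicate the kernel target rejected those optional Get Features requests.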
00:20:08.715 NVM Subsystem Reset: Not Supported 00:20:08.715 Command Sets Supported 00:20:08.715 NVM Command Set: Supported 00:20:08.715 Boot Partition: Not Supported 00:20:08.715 Memory Page Size Minimum: 4096 bytes 00:20:08.715 Memory Page Size Maximum: 4096 bytes 00:20:08.715 Persistent Memory Region: Not Supported 00:20:08.715 Optional Asynchronous Events Supported 00:20:08.715 Namespace Attribute Notices: Supported 00:20:08.715 Firmware Activation Notices: Not Supported 00:20:08.715 ANA Change Notices: Supported 00:20:08.715 PLE Aggregate Log Change Notices: Not Supported 00:20:08.715 LBA Status Info Alert Notices: Not Supported 00:20:08.715 EGE Aggregate Log Change Notices: Not Supported 00:20:08.715 Normal NVM Subsystem Shutdown event: Not Supported 00:20:08.715 Zone Descriptor Change Notices: Not Supported 00:20:08.715 Discovery Log Change Notices: Not Supported 00:20:08.715 Controller Attributes 00:20:08.715 128-bit Host Identifier: Supported 00:20:08.715 Non-Operational Permissive Mode: Not Supported 00:20:08.715 NVM Sets: Not Supported 00:20:08.715 Read Recovery Levels: Not Supported 00:20:08.715 Endurance Groups: Not Supported 00:20:08.715 Predictable Latency Mode: Not Supported 00:20:08.715 Traffic Based Keep ALive: Supported 00:20:08.715 Namespace Granularity: Not Supported 00:20:08.715 SQ Associations: Not Supported 00:20:08.715 UUID List: Not Supported 00:20:08.715 Multi-Domain Subsystem: Not Supported 00:20:08.715 Fixed Capacity Management: Not Supported 00:20:08.715 Variable Capacity Management: Not Supported 00:20:08.715 Delete Endurance Group: Not Supported 00:20:08.715 Delete NVM Set: Not Supported 00:20:08.715 Extended LBA Formats Supported: Not Supported 00:20:08.715 Flexible Data Placement Supported: Not Supported 00:20:08.715 00:20:08.715 Controller Memory Buffer Support 00:20:08.715 ================================ 00:20:08.715 Supported: No 00:20:08.715 00:20:08.715 Persistent Memory Region Support 00:20:08.715 ================================ 00:20:08.715 Supported: No 00:20:08.715 00:20:08.715 Admin Command Set Attributes 00:20:08.715 ============================ 00:20:08.715 Security Send/Receive: Not Supported 00:20:08.715 Format NVM: Not Supported 00:20:08.715 Firmware Activate/Download: Not Supported 00:20:08.715 Namespace Management: Not Supported 00:20:08.715 Device Self-Test: Not Supported 00:20:08.715 Directives: Not Supported 00:20:08.715 NVMe-MI: Not Supported 00:20:08.715 Virtualization Management: Not Supported 00:20:08.715 Doorbell Buffer Config: Not Supported 00:20:08.715 Get LBA Status Capability: Not Supported 00:20:08.715 Command & Feature Lockdown Capability: Not Supported 00:20:08.715 Abort Command Limit: 4 00:20:08.715 Async Event Request Limit: 4 00:20:08.715 Number of Firmware Slots: N/A 00:20:08.715 Firmware Slot 1 Read-Only: N/A 00:20:08.715 Firmware Activation Without Reset: N/A 00:20:08.715 Multiple Update Detection Support: N/A 00:20:08.715 Firmware Update Granularity: No Information Provided 00:20:08.715 Per-Namespace SMART Log: Yes 00:20:08.715 Asymmetric Namespace Access Log Page: Supported 00:20:08.715 ANA Transition Time : 10 sec 00:20:08.715 00:20:08.715 Asymmetric Namespace Access Capabilities 00:20:08.715 ANA Optimized State : Supported 00:20:08.715 ANA Non-Optimized State : Supported 00:20:08.715 ANA Inaccessible State : Supported 00:20:08.715 ANA Persistent Loss State : Supported 00:20:08.715 ANA Change State : Supported 00:20:08.715 ANAGRPID is not changed : No 00:20:08.715 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:20:08.715 00:20:08.715 ANA Group Identifier Maximum : 128 00:20:08.715 Number of ANA Group Identifiers : 128 00:20:08.715 Max Number of Allowed Namespaces : 1024 00:20:08.715 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:08.715 Command Effects Log Page: Supported 00:20:08.715 Get Log Page Extended Data: Supported 00:20:08.715 Telemetry Log Pages: Not Supported 00:20:08.715 Persistent Event Log Pages: Not Supported 00:20:08.715 Supported Log Pages Log Page: May Support 00:20:08.715 Commands Supported & Effects Log Page: Not Supported 00:20:08.715 Feature Identifiers & Effects Log Page:May Support 00:20:08.715 NVMe-MI Commands & Effects Log Page: May Support 00:20:08.715 Data Area 4 for Telemetry Log: Not Supported 00:20:08.715 Error Log Page Entries Supported: 128 00:20:08.715 Keep Alive: Supported 00:20:08.715 Keep Alive Granularity: 1000 ms 00:20:08.715 00:20:08.715 NVM Command Set Attributes 00:20:08.715 ========================== 00:20:08.715 Submission Queue Entry Size 00:20:08.715 Max: 64 00:20:08.715 Min: 64 00:20:08.715 Completion Queue Entry Size 00:20:08.715 Max: 16 00:20:08.715 Min: 16 00:20:08.715 Number of Namespaces: 1024 00:20:08.715 Compare Command: Not Supported 00:20:08.715 Write Uncorrectable Command: Not Supported 00:20:08.715 Dataset Management Command: Supported 00:20:08.715 Write Zeroes Command: Supported 00:20:08.715 Set Features Save Field: Not Supported 00:20:08.715 Reservations: Not Supported 00:20:08.715 Timestamp: Not Supported 00:20:08.715 Copy: Not Supported 00:20:08.715 Volatile Write Cache: Present 00:20:08.715 Atomic Write Unit (Normal): 1 00:20:08.715 Atomic Write Unit (PFail): 1 00:20:08.715 Atomic Compare & Write Unit: 1 00:20:08.715 Fused Compare & Write: Not Supported 00:20:08.715 Scatter-Gather List 00:20:08.715 SGL Command Set: Supported 00:20:08.715 SGL Keyed: Not Supported 00:20:08.715 SGL Bit Bucket Descriptor: Not Supported 00:20:08.715 SGL Metadata Pointer: Not Supported 00:20:08.715 Oversized SGL: Not Supported 00:20:08.715 SGL Metadata Address: Not Supported 00:20:08.715 SGL Offset: Supported 00:20:08.715 Transport SGL Data Block: Not Supported 00:20:08.715 Replay Protected Memory Block: Not Supported 00:20:08.715 00:20:08.715 Firmware Slot Information 00:20:08.715 ========================= 00:20:08.715 Active slot: 0 00:20:08.715 00:20:08.715 Asymmetric Namespace Access 00:20:08.715 =========================== 00:20:08.715 Change Count : 0 00:20:08.715 Number of ANA Group Descriptors : 1 00:20:08.715 ANA Group Descriptor : 0 00:20:08.715 ANA Group ID : 1 00:20:08.715 Number of NSID Values : 1 00:20:08.715 Change Count : 0 00:20:08.715 ANA State : 1 00:20:08.715 Namespace Identifier : 1 00:20:08.715 00:20:08.715 Commands Supported and Effects 00:20:08.715 ============================== 00:20:08.715 Admin Commands 00:20:08.715 -------------- 00:20:08.715 Get Log Page (02h): Supported 00:20:08.715 Identify (06h): Supported 00:20:08.715 Abort (08h): Supported 00:20:08.715 Set Features (09h): Supported 00:20:08.715 Get Features (0Ah): Supported 00:20:08.715 Asynchronous Event Request (0Ch): Supported 00:20:08.715 Keep Alive (18h): Supported 00:20:08.715 I/O Commands 00:20:08.715 ------------ 00:20:08.715 Flush (00h): Supported 00:20:08.715 Write (01h): Supported LBA-Change 00:20:08.715 Read (02h): Supported 00:20:08.715 Write Zeroes (08h): Supported LBA-Change 00:20:08.715 Dataset Management (09h): Supported 00:20:08.715 00:20:08.715 Error Log 00:20:08.715 ========= 00:20:08.715 Entry: 0 00:20:08.715 Error Count: 0x3 00:20:08.716 Submission 
Queue Id: 0x0 00:20:08.716 Command Id: 0x5 00:20:08.716 Phase Bit: 0 00:20:08.716 Status Code: 0x2 00:20:08.716 Status Code Type: 0x0 00:20:08.716 Do Not Retry: 1 00:20:08.716 Error Location: 0x28 00:20:08.716 LBA: 0x0 00:20:08.716 Namespace: 0x0 00:20:08.716 Vendor Log Page: 0x0 00:20:08.716 ----------- 00:20:08.716 Entry: 1 00:20:08.716 Error Count: 0x2 00:20:08.716 Submission Queue Id: 0x0 00:20:08.716 Command Id: 0x5 00:20:08.716 Phase Bit: 0 00:20:08.716 Status Code: 0x2 00:20:08.716 Status Code Type: 0x0 00:20:08.716 Do Not Retry: 1 00:20:08.716 Error Location: 0x28 00:20:08.716 LBA: 0x0 00:20:08.716 Namespace: 0x0 00:20:08.716 Vendor Log Page: 0x0 00:20:08.716 ----------- 00:20:08.716 Entry: 2 00:20:08.716 Error Count: 0x1 00:20:08.716 Submission Queue Id: 0x0 00:20:08.716 Command Id: 0x4 00:20:08.716 Phase Bit: 0 00:20:08.716 Status Code: 0x2 00:20:08.716 Status Code Type: 0x0 00:20:08.716 Do Not Retry: 1 00:20:08.716 Error Location: 0x28 00:20:08.716 LBA: 0x0 00:20:08.716 Namespace: 0x0 00:20:08.716 Vendor Log Page: 0x0 00:20:08.716 00:20:08.716 Number of Queues 00:20:08.716 ================ 00:20:08.716 Number of I/O Submission Queues: 128 00:20:08.716 Number of I/O Completion Queues: 128 00:20:08.716 00:20:08.716 ZNS Specific Controller Data 00:20:08.716 ============================ 00:20:08.716 Zone Append Size Limit: 0 00:20:08.716 00:20:08.716 00:20:08.716 Active Namespaces 00:20:08.716 ================= 00:20:08.716 get_feature(0x05) failed 00:20:08.716 Namespace ID:1 00:20:08.716 Command Set Identifier: NVM (00h) 00:20:08.716 Deallocate: Supported 00:20:08.716 Deallocated/Unwritten Error: Not Supported 00:20:08.716 Deallocated Read Value: Unknown 00:20:08.716 Deallocate in Write Zeroes: Not Supported 00:20:08.716 Deallocated Guard Field: 0xFFFF 00:20:08.716 Flush: Supported 00:20:08.716 Reservation: Not Supported 00:20:08.716 Namespace Sharing Capabilities: Multiple Controllers 00:20:08.716 Size (in LBAs): 1310720 (5GiB) 00:20:08.716 Capacity (in LBAs): 1310720 (5GiB) 00:20:08.716 Utilization (in LBAs): 1310720 (5GiB) 00:20:08.716 UUID: 3c95b10a-8f6d-47a5-9c4c-f7e27752c6fb 00:20:08.716 Thin Provisioning: Not Supported 00:20:08.716 Per-NS Atomic Units: Yes 00:20:08.716 Atomic Boundary Size (Normal): 0 00:20:08.716 Atomic Boundary Size (PFail): 0 00:20:08.716 Atomic Boundary Offset: 0 00:20:08.716 NGUID/EUI64 Never Reused: No 00:20:08.716 ANA group ID: 1 00:20:08.716 Namespace Write Protected: No 00:20:08.716 Number of LBA Formats: 1 00:20:08.716 Current LBA Format: LBA Format #00 00:20:08.716 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:08.716 00:20:08.716 20:11:50 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:08.716 20:11:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:08.716 20:11:50 -- nvmf/common.sh@117 -- # sync 00:20:08.716 20:11:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.716 20:11:50 -- nvmf/common.sh@120 -- # set +e 00:20:08.716 20:11:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.716 20:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.716 rmmod nvme_tcp 00:20:08.716 rmmod nvme_fabrics 00:20:08.716 20:11:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.716 20:11:50 -- nvmf/common.sh@124 -- # set -e 00:20:08.716 20:11:50 -- nvmf/common.sh@125 -- # return 0 00:20:08.716 20:11:50 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:20:08.716 20:11:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:08.716 20:11:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:08.716 20:11:50 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:08.716 20:11:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.716 20:11:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.716 20:11:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.716 20:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.716 20:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.716 20:11:50 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:08.716 20:11:50 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:08.716 20:11:50 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:08.716 20:11:50 -- nvmf/common.sh@675 -- # echo 0 00:20:08.976 20:11:50 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:08.976 20:11:50 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:08.976 20:11:50 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:08.976 20:11:50 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:08.976 20:11:50 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:08.976 20:11:50 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:08.976 20:11:51 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:09.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:09.803 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:09.803 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:09.803 00:20:09.803 real 0m3.223s 00:20:09.803 user 0m1.107s 00:20:09.803 sys 0m1.672s 00:20:09.803 20:11:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:09.803 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:09.803 ************************************ 00:20:09.803 END TEST nvmf_identify_kernel_target 00:20:09.803 ************************************ 00:20:10.063 20:11:52 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:10.063 20:11:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:10.063 20:11:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:10.063 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.063 ************************************ 00:20:10.063 START TEST nvmf_auth 00:20:10.063 ************************************ 00:20:10.063 20:11:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:10.063 * Looking for test storage... 
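Annotation: between the two tests, clean_kernel_target (traced just above) removes the kernel export in roughly the reverse order it was created, and nvmf_tcp_fini flushes the test network. Condensed, with the same caveat that xtrace hides the redirection target of the lone echo:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"      # assumed target of the 'echo 0' above: disable the namespace first
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  "$subsys/namespaces/1"
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  "$subsys"
  modprobe -r nvmet_tcp nvmet                 # only possible once nothing references the modules

scripts/setup.sh then rebinds the NVMe PCI devices from the kernel nvme driver back to uio_pci_generic for the rest of the run.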
00:20:10.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.063 20:11:52 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.063 20:11:52 -- nvmf/common.sh@7 -- # uname -s 00:20:10.063 20:11:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.063 20:11:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.063 20:11:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.063 20:11:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.063 20:11:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.063 20:11:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.063 20:11:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.063 20:11:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.063 20:11:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.063 20:11:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.063 20:11:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:20:10.063 20:11:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:20:10.063 20:11:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.063 20:11:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.063 20:11:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.063 20:11:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.063 20:11:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.323 20:11:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.323 20:11:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.323 20:11:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.323 20:11:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 20:11:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 20:11:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 20:11:52 -- paths/export.sh@5 -- # export PATH 00:20:10.323 20:11:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.323 20:11:52 -- nvmf/common.sh@47 -- # : 0 00:20:10.323 20:11:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.323 20:11:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.323 20:11:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.323 20:11:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.323 20:11:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.323 20:11:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.323 20:11:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.323 20:11:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.323 20:11:52 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:10.323 20:11:52 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:10.323 20:11:52 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:10.323 20:11:52 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:10.323 20:11:52 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:10.323 20:11:52 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:10.323 20:11:52 -- host/auth.sh@21 -- # keys=() 00:20:10.323 20:11:52 -- host/auth.sh@77 -- # nvmftestinit 00:20:10.323 20:11:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:10.323 20:11:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.323 20:11:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:10.323 20:11:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:10.323 20:11:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:10.323 20:11:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.323 20:11:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.323 20:11:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.323 20:11:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:10.323 20:11:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:10.323 20:11:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:10.323 20:11:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:10.323 20:11:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:10.323 20:11:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:10.323 20:11:52 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.323 20:11:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.323 20:11:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:10.323 20:11:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:10.323 20:11:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:10.323 20:11:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:10.323 20:11:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:10.323 20:11:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.323 20:11:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:10.323 20:11:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:10.323 20:11:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:10.323 20:11:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:10.323 20:11:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:10.323 20:11:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:10.323 Cannot find device "nvmf_tgt_br" 00:20:10.323 20:11:52 -- nvmf/common.sh@155 -- # true 00:20:10.323 20:11:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.323 Cannot find device "nvmf_tgt_br2" 00:20:10.323 20:11:52 -- nvmf/common.sh@156 -- # true 00:20:10.323 20:11:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:10.323 20:11:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:10.323 Cannot find device "nvmf_tgt_br" 00:20:10.323 20:11:52 -- nvmf/common.sh@158 -- # true 00:20:10.323 20:11:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:10.323 Cannot find device "nvmf_tgt_br2" 00:20:10.323 20:11:52 -- nvmf/common.sh@159 -- # true 00:20:10.323 20:11:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:10.323 20:11:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:10.323 20:11:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.323 20:11:52 -- nvmf/common.sh@162 -- # true 00:20:10.323 20:11:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.323 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.323 20:11:52 -- nvmf/common.sh@163 -- # true 00:20:10.323 20:11:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.323 20:11:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.323 20:11:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.323 20:11:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.323 20:11:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.323 20:11:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.583 20:11:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:10.583 20:11:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:10.583 20:11:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:10.583 20:11:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:10.583 20:11:52 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:10.583 20:11:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:10.583 20:11:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:10.583 20:11:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:10.583 20:11:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:10.583 20:11:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:10.583 20:11:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:10.583 20:11:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:10.583 20:11:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:10.583 20:11:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:10.583 20:11:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:10.583 20:11:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:10.583 20:11:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:10.583 20:11:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:10.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:20:10.583 00:20:10.583 --- 10.0.0.2 ping statistics --- 00:20:10.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.583 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:20:10.583 20:11:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:10.583 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:10.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:20:10.583 00:20:10.583 --- 10.0.0.3 ping statistics --- 00:20:10.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.583 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:10.583 20:11:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:10.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
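Annotation: nvmf_veth_init (nvmf/common.sh@141-207), traced here for the second time in this build, rebuilds the virtual test network used by the TCP transport: a target network namespace, veth pairs, a bridge, an iptables accept rule for port 4420, and ping checks in both directions. Reduced to its essentials, using the names and addresses from the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is configured the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the default netns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, moved into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the helper first tries to delete any leftovers from a previous run before creating fresh interfaces.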
00:20:10.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:10.583 00:20:10.583 --- 10.0.0.1 ping statistics --- 00:20:10.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.583 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:10.583 20:11:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.583 20:11:52 -- nvmf/common.sh@422 -- # return 0 00:20:10.583 20:11:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:10.583 20:11:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.583 20:11:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:10.583 20:11:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:10.583 20:11:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.583 20:11:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:10.583 20:11:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:10.583 20:11:52 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:20:10.583 20:11:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:10.583 20:11:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:10.583 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:10.583 20:11:52 -- nvmf/common.sh@470 -- # nvmfpid=74603 00:20:10.583 20:11:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:10.583 20:11:52 -- nvmf/common.sh@471 -- # waitforlisten 74603 00:20:10.583 20:11:52 -- common/autotest_common.sh@817 -- # '[' -z 74603 ']' 00:20:10.583 20:11:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.583 20:11:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:10.583 20:11:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
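Annotation: nvmfappstart (traced above) prepends the netns wrapper to NVMF_APP and launches the SPDK target inside nvmf_tgt_ns_spdk with the nvme_auth debug log component enabled, then blocks in waitforlisten until the app answers RPCs on /var/tmp/spdk.sock. A simplified sketch of that launch; the real waitforlisten also checks that the pid is still alive and eventually times out, which is omitted here:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  nvmfpid=$!

  # poll the RPC socket until the target is ready (rpc_get_methods is the usual liveness probe)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done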
00:20:10.583 20:11:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:10.583 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:20:11.521 20:11:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:11.521 20:11:53 -- common/autotest_common.sh@850 -- # return 0 00:20:11.521 20:11:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:11.521 20:11:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:11.521 20:11:53 -- common/autotest_common.sh@10 -- # set +x 00:20:11.521 20:11:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.521 20:11:53 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:11.521 20:11:53 -- host/auth.sh@81 -- # gen_key null 32 00:20:11.521 20:11:53 -- host/auth.sh@53 -- # local digest len file key 00:20:11.521 20:11:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.521 20:11:53 -- host/auth.sh@54 -- # local -A digests 00:20:11.521 20:11:53 -- host/auth.sh@56 -- # digest=null 00:20:11.521 20:11:53 -- host/auth.sh@56 -- # len=32 00:20:11.521 20:11:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:11.521 20:11:53 -- host/auth.sh@57 -- # key=f807dc56ca9a6b8ff3ccf639a1180890 00:20:11.521 20:11:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:11.521 20:11:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.0Ie 00:20:11.521 20:11:53 -- host/auth.sh@59 -- # format_dhchap_key f807dc56ca9a6b8ff3ccf639a1180890 0 00:20:11.521 20:11:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 f807dc56ca9a6b8ff3ccf639a1180890 0 00:20:11.521 20:11:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:11.521 20:11:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:11.521 20:11:53 -- nvmf/common.sh@693 -- # key=f807dc56ca9a6b8ff3ccf639a1180890 00:20:11.521 20:11:53 -- nvmf/common.sh@693 -- # digest=0 00:20:11.521 20:11:53 -- nvmf/common.sh@694 -- # python - 00:20:11.521 20:11:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.0Ie 00:20:11.522 20:11:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.0Ie 00:20:11.522 20:11:53 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.0Ie 00:20:11.522 20:11:53 -- host/auth.sh@82 -- # gen_key null 48 00:20:11.522 20:11:53 -- host/auth.sh@53 -- # local digest len file key 00:20:11.522 20:11:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.522 20:11:53 -- host/auth.sh@54 -- # local -A digests 00:20:11.522 20:11:53 -- host/auth.sh@56 -- # digest=null 00:20:11.522 20:11:53 -- host/auth.sh@56 -- # len=48 00:20:11.522 20:11:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:11.522 20:11:53 -- host/auth.sh@57 -- # key=2f98e51e82195f1e6df9c6c1b4473ab742e5a36891eba0d3 00:20:11.522 20:11:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:20:11.522 20:11:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.nBK 00:20:11.522 20:11:53 -- host/auth.sh@59 -- # format_dhchap_key 2f98e51e82195f1e6df9c6c1b4473ab742e5a36891eba0d3 0 00:20:11.522 20:11:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 2f98e51e82195f1e6df9c6c1b4473ab742e5a36891eba0d3 0 00:20:11.522 20:11:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:11.522 20:11:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:11.522 20:11:53 -- nvmf/common.sh@693 -- # key=2f98e51e82195f1e6df9c6c1b4473ab742e5a36891eba0d3 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # digest=0 00:20:11.781 
20:11:53 -- nvmf/common.sh@694 -- # python - 00:20:11.781 20:11:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.nBK 00:20:11.781 20:11:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.nBK 00:20:11.781 20:11:53 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.nBK 00:20:11.781 20:11:53 -- host/auth.sh@83 -- # gen_key sha256 32 00:20:11.781 20:11:53 -- host/auth.sh@53 -- # local digest len file key 00:20:11.781 20:11:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.781 20:11:53 -- host/auth.sh@54 -- # local -A digests 00:20:11.781 20:11:53 -- host/auth.sh@56 -- # digest=sha256 00:20:11.781 20:11:53 -- host/auth.sh@56 -- # len=32 00:20:11.781 20:11:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:11.781 20:11:53 -- host/auth.sh@57 -- # key=9840ecd2a87542f34cbcb56b3e552a94 00:20:11.781 20:11:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:20:11.781 20:11:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.5WU 00:20:11.781 20:11:53 -- host/auth.sh@59 -- # format_dhchap_key 9840ecd2a87542f34cbcb56b3e552a94 1 00:20:11.781 20:11:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 9840ecd2a87542f34cbcb56b3e552a94 1 00:20:11.781 20:11:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # key=9840ecd2a87542f34cbcb56b3e552a94 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # digest=1 00:20:11.781 20:11:53 -- nvmf/common.sh@694 -- # python - 00:20:11.781 20:11:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.5WU 00:20:11.781 20:11:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.5WU 00:20:11.781 20:11:53 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.5WU 00:20:11.781 20:11:53 -- host/auth.sh@84 -- # gen_key sha384 48 00:20:11.781 20:11:53 -- host/auth.sh@53 -- # local digest len file key 00:20:11.781 20:11:53 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.781 20:11:53 -- host/auth.sh@54 -- # local -A digests 00:20:11.781 20:11:53 -- host/auth.sh@56 -- # digest=sha384 00:20:11.781 20:11:53 -- host/auth.sh@56 -- # len=48 00:20:11.781 20:11:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:11.781 20:11:53 -- host/auth.sh@57 -- # key=53f844777a0ec15594f7d44c981133f609842f58d66e843d 00:20:11.781 20:11:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:20:11.781 20:11:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.b0h 00:20:11.781 20:11:53 -- host/auth.sh@59 -- # format_dhchap_key 53f844777a0ec15594f7d44c981133f609842f58d66e843d 2 00:20:11.781 20:11:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 53f844777a0ec15594f7d44c981133f609842f58d66e843d 2 00:20:11.781 20:11:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # key=53f844777a0ec15594f7d44c981133f609842f58d66e843d 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # digest=2 00:20:11.781 20:11:53 -- nvmf/common.sh@694 -- # python - 00:20:11.781 20:11:53 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.b0h 00:20:11.781 20:11:53 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.b0h 00:20:11.781 20:11:53 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.b0h 00:20:11.781 20:11:53 -- host/auth.sh@85 -- # gen_key sha512 64 00:20:11.781 20:11:53 -- host/auth.sh@53 -- # local digest len file key 00:20:11.781 20:11:53 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.781 20:11:53 -- host/auth.sh@54 -- # local -A digests 00:20:11.781 20:11:53 -- host/auth.sh@56 -- # digest=sha512 00:20:11.781 20:11:53 -- host/auth.sh@56 -- # len=64 00:20:11.781 20:11:53 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:11.781 20:11:53 -- host/auth.sh@57 -- # key=7360b57d19cabf119695310eaaf2579ebdfe66803b9dd93d896e291e692b99e4 00:20:11.781 20:11:53 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:20:11.781 20:11:53 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.xDO 00:20:11.781 20:11:53 -- host/auth.sh@59 -- # format_dhchap_key 7360b57d19cabf119695310eaaf2579ebdfe66803b9dd93d896e291e692b99e4 3 00:20:11.781 20:11:53 -- nvmf/common.sh@708 -- # format_key DHHC-1 7360b57d19cabf119695310eaaf2579ebdfe66803b9dd93d896e291e692b99e4 3 00:20:11.781 20:11:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # key=7360b57d19cabf119695310eaaf2579ebdfe66803b9dd93d896e291e692b99e4 00:20:11.781 20:11:53 -- nvmf/common.sh@693 -- # digest=3 00:20:11.781 20:11:53 -- nvmf/common.sh@694 -- # python - 00:20:11.781 20:11:54 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.xDO 00:20:11.781 20:11:54 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.xDO 00:20:11.781 20:11:54 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.xDO 00:20:11.781 20:11:54 -- host/auth.sh@87 -- # waitforlisten 74603 00:20:11.781 20:11:54 -- common/autotest_common.sh@817 -- # '[' -z 74603 ']' 00:20:11.781 20:11:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.781 20:11:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.781 20:11:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
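What the gen_key traces above amount to: read N/2 random bytes, hex-encode them with xxd, and wrap the resulting hex string into the DHHC-1 secret format that the rest of the log uses (DHHC-1:<hh>:<base64>:). A minimal stand-alone sketch follows; the CRC-32 suffix inside the base64 blob and the two-digit digest id (00 = null, 01 = sha256, 02 = sha384, 03 = sha512) are inferred from the traced "python -" step and the key strings that appear later in this log, not copied from nvmf/common.sh, so treat the wrapper as an approximation.

# Sketch of gen_key/format_dhchap_key: <digest id> <hex length> -> DHHC-1 secret on stdout.
gen_dhchap_key() {
    local digest_id=$1 len=$2 secret
    # len hex characters of randomness, as in "xxd -p -c0 -l $((len / 2)) /dev/urandom" above
    secret=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    # Assumed wrapping: DHHC-1:<2-hex-digit digest id>:<base64(secret || CRC-32(secret), little endian)>:
    DHCHAP_SECRET=$secret DHCHAP_DIGEST=$digest_id python3 -c '
import base64, os, zlib
secret = os.environ["DHCHAP_SECRET"].encode()
digest = int(os.environ["DHCHAP_DIGEST"])
crc = zlib.crc32(secret).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(secret + crc).decode()))
'
}

# Roughly what "gen_key null 32" and "gen_key sha512 64" produced above (file names are examples):
gen_dhchap_key 0 32 > /tmp/example.key-null   && chmod 0600 /tmp/example.key-null
gen_dhchap_key 3 64 > /tmp/example.key-sha512 && chmod 0600 /tmp/example.key-sha512

The files are restricted to mode 0600 because the secrets are later handed to the SPDK keyring by path, mirroring the chmod calls in the trace.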
00:20:11.781 20:11:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:11.781 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.040 20:11:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:12.040 20:11:54 -- common/autotest_common.sh@850 -- # return 0 00:20:12.040 20:11:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:12.040 20:11:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Ie 00:20:12.040 20:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.040 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.040 20:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.040 20:11:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:12.040 20:11:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nBK 00:20:12.040 20:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.040 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.040 20:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.040 20:11:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:12.040 20:11:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5WU 00:20:12.040 20:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.040 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.040 20:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.040 20:11:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:12.040 20:11:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.b0h 00:20:12.040 20:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.040 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.040 20:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.040 20:11:54 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:20:12.040 20:11:54 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xDO 00:20:12.040 20:11:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:12.040 20:11:54 -- common/autotest_common.sh@10 -- # set +x 00:20:12.040 20:11:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:12.040 20:11:54 -- host/auth.sh@92 -- # nvmet_auth_init 00:20:12.040 20:11:54 -- host/auth.sh@35 -- # get_main_ns_ip 00:20:12.040 20:11:54 -- nvmf/common.sh@717 -- # local ip 00:20:12.040 20:11:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:12.040 20:11:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:12.040 20:11:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.040 20:11:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.040 20:11:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:12.040 20:11:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.040 20:11:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:12.040 20:11:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:12.040 20:11:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:12.040 20:11:54 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:12.040 20:11:54 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:12.040 20:11:54 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:12.040 20:11:54 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:12.040 20:11:54 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:12.040 20:11:54 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:12.040 20:11:54 -- nvmf/common.sh@628 -- # local block nvme 00:20:12.040 20:11:54 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:20:12.040 20:11:54 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:12.299 20:11:54 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:12.299 20:11:54 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:12.558 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:12.558 Waiting for block devices as requested 00:20:12.816 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:12.816 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:13.385 20:11:55 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.385 20:11:55 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:13.385 20:11:55 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:13.385 20:11:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:13.385 20:11:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:13.385 20:11:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.385 20:11:55 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:13.385 20:11:55 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:13.385 20:11:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:13.643 No valid GPT data, bailing 00:20:13.643 20:11:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:13.643 20:11:55 -- scripts/common.sh@391 -- # pt= 00:20:13.643 20:11:55 -- scripts/common.sh@392 -- # return 1 00:20:13.643 20:11:55 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:13.643 20:11:55 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.643 20:11:55 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:13.643 20:11:55 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:13.643 20:11:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:13.643 20:11:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:13.643 20:11:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.643 20:11:55 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:13.643 20:11:55 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:13.644 20:11:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:13.644 No valid GPT data, bailing 00:20:13.644 20:11:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:13.644 20:11:55 -- scripts/common.sh@391 -- # pt= 00:20:13.644 20:11:55 -- scripts/common.sh@392 -- # return 1 00:20:13.644 20:11:55 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:13.644 20:11:55 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.644 20:11:55 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:13.644 20:11:55 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:20:13.644 20:11:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:13.644 20:11:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:13.644 20:11:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.644 20:11:55 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:20:13.644 20:11:55 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:13.644 20:11:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:13.644 No valid GPT data, bailing 00:20:13.644 20:11:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:13.644 20:11:55 -- scripts/common.sh@391 -- # pt= 00:20:13.644 20:11:55 -- scripts/common.sh@392 -- # return 1 00:20:13.644 20:11:55 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:13.644 20:11:55 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:13.644 20:11:55 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:13.644 20:11:55 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:13.644 20:11:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:13.644 20:11:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:13.644 20:11:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:13.644 20:11:55 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:13.644 20:11:55 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:13.644 20:11:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:13.903 No valid GPT data, bailing 00:20:13.903 20:11:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:13.903 20:11:55 -- scripts/common.sh@391 -- # pt= 00:20:13.903 20:11:55 -- scripts/common.sh@392 -- # return 1 00:20:13.903 20:11:55 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:13.903 20:11:55 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:13.903 20:11:55 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:13.903 20:11:55 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:13.903 20:11:55 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:13.903 20:11:55 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:13.903 20:11:55 -- nvmf/common.sh@656 -- # echo 1 00:20:13.903 20:11:55 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:13.903 20:11:55 -- nvmf/common.sh@658 -- # echo 1 00:20:13.904 20:11:55 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:13.904 20:11:55 -- nvmf/common.sh@661 -- # echo tcp 00:20:13.904 20:11:55 -- nvmf/common.sh@662 -- # echo 4420 00:20:13.904 20:11:55 -- nvmf/common.sh@663 -- # echo ipv4 00:20:13.904 20:11:55 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:13.904 20:11:55 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf --hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -a 10.0.0.1 -t tcp -s 4420 00:20:13.904 00:20:13.904 Discovery Log Number of Records 2, Generation counter 2 00:20:13.904 =====Discovery Log Entry 0====== 00:20:13.904 trtype: tcp 00:20:13.904 adrfam: ipv4 00:20:13.904 subtype: current discovery subsystem 00:20:13.904 treq: not specified, sq flow control disable supported 00:20:13.904 portid: 1 00:20:13.904 trsvcid: 4420 00:20:13.904 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:13.904 traddr: 10.0.0.1 00:20:13.904 eflags: none 00:20:13.904 sectype: none 00:20:13.904 =====Discovery Log Entry 1====== 00:20:13.904 trtype: tcp 00:20:13.904 adrfam: ipv4 00:20:13.904 subtype: nvme subsystem 00:20:13.904 treq: not specified, sq flow control disable supported 
00:20:13.904 portid: 1 00:20:13.904 trsvcid: 4420 00:20:13.904 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:13.904 traddr: 10.0.0.1 00:20:13.904 eflags: none 00:20:13.904 sectype: none 00:20:13.904 20:11:55 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:13.904 20:11:55 -- host/auth.sh@37 -- # echo 0 00:20:13.904 20:11:55 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:13.904 20:11:55 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:13.904 20:11:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:13.904 20:11:55 -- host/auth.sh@44 -- # digest=sha256 00:20:13.904 20:11:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.904 20:11:55 -- host/auth.sh@44 -- # keyid=1 00:20:13.904 20:11:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:13.904 20:11:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:13.904 20:11:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:13.904 20:11:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:13.904 20:11:56 -- host/auth.sh@100 -- # IFS=, 00:20:13.904 20:11:56 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:20:13.904 20:11:56 -- host/auth.sh@100 -- # IFS=, 00:20:13.904 20:11:56 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:13.904 20:11:56 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:13.904 20:11:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:13.904 20:11:56 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:20:13.904 20:11:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:13.904 20:11:56 -- host/auth.sh@68 -- # keyid=1 00:20:13.904 20:11:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:13.904 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.904 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:13.904 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.904 20:11:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:13.904 20:11:56 -- nvmf/common.sh@717 -- # local ip 00:20:13.904 20:11:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:13.904 20:11:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:13.904 20:11:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.904 20:11:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.904 20:11:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:13.904 20:11:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.904 20:11:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:13.904 20:11:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:13.904 20:11:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:13.904 20:11:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:13.904 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.904 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 
nvme0n1 00:20:14.163 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.163 20:11:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.163 20:11:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.163 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.163 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.163 20:11:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.163 20:11:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.163 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.163 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.163 20:11:56 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:14.163 20:11:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.163 20:11:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.163 20:11:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:14.163 20:11:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.163 20:11:56 -- host/auth.sh@44 -- # digest=sha256 00:20:14.163 20:11:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.163 20:11:56 -- host/auth.sh@44 -- # keyid=0 00:20:14.163 20:11:56 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:14.163 20:11:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.163 20:11:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:14.163 20:11:56 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:14.163 20:11:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:20:14.163 20:11:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.163 20:11:56 -- host/auth.sh@68 -- # digest=sha256 00:20:14.163 20:11:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:14.163 20:11:56 -- host/auth.sh@68 -- # keyid=0 00:20:14.163 20:11:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.163 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.163 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.163 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.163 20:11:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.163 20:11:56 -- nvmf/common.sh@717 -- # local ip 00:20:14.163 20:11:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.163 20:11:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.163 20:11:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.163 20:11:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.163 20:11:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.163 20:11:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.163 20:11:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.163 20:11:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.163 20:11:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.164 20:11:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:14.164 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.164 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 nvme0n1 
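On the target side, the nvmet_auth_init and nvmet_auth_set_key steps traced above drive the kernel nvmet configfs tree directly: build the subsystem, namespace and TCP port, whitelist the test host NQN, and store the DHCHAP secret plus the digest and DH group on that host entry. A rough reconstruction is below. The xtrace output does not show where each echo is redirected, so the attribute file names (attr_allow_any_host, device_path, dhchap_key, dhchap_hash, dhchap_dhgroup and so on) are an assumption about the kernel's configfs layout rather than a transcript of the script.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
port=$nvmet/ports/1

modprobe nvmet          # traced above; the TCP transport module (nvmet_tcp) must also be available

mkdir -p "$subsys/namespaces/1" "$port" "$host"
echo 0            > "$subsys/attr_allow_any_host"    # only NQNs linked under allowed_hosts may connect
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                   # expose the subsystem on the TCP port

ln -s "$host" "$subsys/allowed_hosts/"                # whitelist the initiator NQN
echo "DHHC-1:00:<base64 secret>:" > "$host/dhchap_key"   # per-host DH-HMAC-CHAP secret (placeholder value)
echo "hmac(sha256)"               > "$host/dhchap_hash"
echo ffdhe2048                    > "$host/dhchap_dhgroup"

The nvme discover output earlier in the log (discovery subsystem plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420) is what this port and subsystem wiring produces once the symlink is in place.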
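On the initiator side, each connect_authenticate pass above is three SPDK RPCs plus a sanity check, and host/auth.sh repeats the pattern for every digest, DH group and key index (the "for digest", "for dhgroup" and "for keyid" loops visible in the trace, which is why the same get_controllers/detach_controller sequence keeps recurring below for ffdhe2048, ffdhe3072 and ffdhe4096). Stripped of the harness, one iteration looks roughly like this; scripts/rpc.py stands in for the rpc_cmd wrapper, and all names, addresses and flags are the ones already used in this log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # rpc_cmd talks to the target app on /var/tmp/spdk.sock

# Register the secret file once, then advertise the digest/DH-group pair under test.
"$rpc" keyring_file_add_key key0 /tmp/spdk.key-null.0Ie
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target with DH-HMAC-CHAP using that key...
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0

# ...verify the controller actually came up, then tear it down for the next combination.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

Detaching after each successful attach is what lets the next digest/dhgroup/key combination reuse the same controller name nvme0, which is why every iteration in the log ends with bdev_nvme_detach_controller nvme0.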
00:20:14.164 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.164 20:11:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.164 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.164 20:11:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.164 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.423 20:11:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:14.423 20:11:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.423 20:11:56 -- host/auth.sh@44 -- # digest=sha256 00:20:14.423 20:11:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.423 20:11:56 -- host/auth.sh@44 -- # keyid=1 00:20:14.423 20:11:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:14.423 20:11:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.423 20:11:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:14.423 20:11:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:14.423 20:11:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:20:14.423 20:11:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.423 20:11:56 -- host/auth.sh@68 -- # digest=sha256 00:20:14.423 20:11:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:14.423 20:11:56 -- host/auth.sh@68 -- # keyid=1 00:20:14.423 20:11:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.423 20:11:56 -- nvmf/common.sh@717 -- # local ip 00:20:14.423 20:11:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.423 20:11:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.423 20:11:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.423 20:11:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.423 20:11:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.423 20:11:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.423 20:11:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.423 20:11:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.423 20:11:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.423 20:11:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 nvme0n1 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.423 20:11:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:14.423 20:11:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.423 20:11:56 -- host/auth.sh@44 -- # digest=sha256 00:20:14.423 20:11:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.423 20:11:56 -- host/auth.sh@44 -- # keyid=2 00:20:14.423 20:11:56 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:14.423 20:11:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.423 20:11:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:14.423 20:11:56 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:14.423 20:11:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:20:14.423 20:11:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.423 20:11:56 -- host/auth.sh@68 -- # digest=sha256 00:20:14.423 20:11:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:14.423 20:11:56 -- host/auth.sh@68 -- # keyid=2 00:20:14.423 20:11:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.423 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.423 20:11:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.423 20:11:56 -- nvmf/common.sh@717 -- # local ip 00:20:14.423 20:11:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.423 20:11:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.423 20:11:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.423 20:11:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.423 20:11:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.423 20:11:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.423 20:11:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.423 20:11:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.423 20:11:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.423 20:11:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:14.423 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.423 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.683 nvme0n1 00:20:14.683 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.683 20:11:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.683 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.683 20:11:56 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:20:14.683 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.683 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.683 20:11:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.683 20:11:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.683 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.683 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.683 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.683 20:11:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.683 20:11:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:14.683 20:11:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.683 20:11:56 -- host/auth.sh@44 -- # digest=sha256 00:20:14.683 20:11:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.683 20:11:56 -- host/auth.sh@44 -- # keyid=3 00:20:14.683 20:11:56 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:14.683 20:11:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.683 20:11:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:14.683 20:11:56 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:14.683 20:11:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:20:14.683 20:11:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.683 20:11:56 -- host/auth.sh@68 -- # digest=sha256 00:20:14.683 20:11:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:14.683 20:11:56 -- host/auth.sh@68 -- # keyid=3 00:20:14.683 20:11:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.683 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.683 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.683 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.683 20:11:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.683 20:11:56 -- nvmf/common.sh@717 -- # local ip 00:20:14.683 20:11:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.683 20:11:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.683 20:11:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.683 20:11:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.683 20:11:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.683 20:11:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.683 20:11:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.683 20:11:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.683 20:11:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.683 20:11:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:14.683 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.683 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.943 nvme0n1 00:20:14.944 20:11:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.944 20:11:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.944 20:11:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.944 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.944 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.944 20:11:56 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.944 20:11:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.944 20:11:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.944 20:11:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.944 20:11:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.944 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.944 20:11:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:14.944 20:11:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:14.944 20:11:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:14.944 20:11:57 -- host/auth.sh@44 -- # digest=sha256 00:20:14.944 20:11:57 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.944 20:11:57 -- host/auth.sh@44 -- # keyid=4 00:20:14.944 20:11:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:14.944 20:11:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:14.944 20:11:57 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:14.944 20:11:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:14.944 20:11:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:20:14.944 20:11:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:14.944 20:11:57 -- host/auth.sh@68 -- # digest=sha256 00:20:14.944 20:11:57 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:14.944 20:11:57 -- host/auth.sh@68 -- # keyid=4 00:20:14.944 20:11:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:14.944 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.944 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:14.944 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.944 20:11:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:14.944 20:11:57 -- nvmf/common.sh@717 -- # local ip 00:20:14.944 20:11:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:14.944 20:11:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:14.944 20:11:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.944 20:11:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.944 20:11:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:14.944 20:11:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.944 20:11:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:14.944 20:11:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:14.944 20:11:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:14.944 20:11:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:14.944 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.944 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:14.944 nvme0n1 00:20:14.944 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.944 20:11:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.944 20:11:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:14.944 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.944 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:14.944 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.944 20:11:57 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.944 20:11:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:14.944 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.944 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.204 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.204 20:11:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.204 20:11:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.204 20:11:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:15.204 20:11:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.204 20:11:57 -- host/auth.sh@44 -- # digest=sha256 00:20:15.204 20:11:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.204 20:11:57 -- host/auth.sh@44 -- # keyid=0 00:20:15.204 20:11:57 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:15.204 20:11:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.204 20:11:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.204 20:11:57 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:15.204 20:11:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:20:15.204 20:11:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.204 20:11:57 -- host/auth.sh@68 -- # digest=sha256 00:20:15.204 20:11:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.204 20:11:57 -- host/auth.sh@68 -- # keyid=0 00:20:15.204 20:11:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.204 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.204 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.204 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.204 20:11:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.204 20:11:57 -- nvmf/common.sh@717 -- # local ip 00:20:15.204 20:11:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.204 20:11:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.204 20:11:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.204 20:11:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.204 20:11:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:15.204 20:11:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.204 20:11:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:15.204 20:11:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:15.204 20:11:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:15.204 20:11:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:15.204 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.204 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.463 nvme0n1 00:20:15.463 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.463 20:11:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.463 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.463 20:11:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.463 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.463 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.463 20:11:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.463 20:11:57 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.463 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.463 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.463 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.463 20:11:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.463 20:11:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:15.463 20:11:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.463 20:11:57 -- host/auth.sh@44 -- # digest=sha256 00:20:15.463 20:11:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.463 20:11:57 -- host/auth.sh@44 -- # keyid=1 00:20:15.463 20:11:57 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:15.463 20:11:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.463 20:11:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.463 20:11:57 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:15.463 20:11:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:20:15.463 20:11:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.463 20:11:57 -- host/auth.sh@68 -- # digest=sha256 00:20:15.463 20:11:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.463 20:11:57 -- host/auth.sh@68 -- # keyid=1 00:20:15.463 20:11:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.463 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.463 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.463 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.463 20:11:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.463 20:11:57 -- nvmf/common.sh@717 -- # local ip 00:20:15.463 20:11:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.463 20:11:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.463 20:11:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.463 20:11:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.463 20:11:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:15.463 20:11:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.463 20:11:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:15.463 20:11:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:15.463 20:11:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:15.463 20:11:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:15.463 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.463 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.722 nvme0n1 00:20:15.723 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.723 20:11:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.723 20:11:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.723 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.723 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.723 20:11:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.723 20:11:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.723 20:11:57 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:20:15.723 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.723 20:11:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.723 20:11:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:15.723 20:11:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.723 20:11:57 -- host/auth.sh@44 -- # digest=sha256 00:20:15.723 20:11:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.723 20:11:57 -- host/auth.sh@44 -- # keyid=2 00:20:15.723 20:11:57 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:15.723 20:11:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.723 20:11:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.723 20:11:57 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:15.723 20:11:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:20:15.723 20:11:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.723 20:11:57 -- host/auth.sh@68 -- # digest=sha256 00:20:15.723 20:11:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.723 20:11:57 -- host/auth.sh@68 -- # keyid=2 00:20:15.723 20:11:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.723 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.723 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.723 20:11:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.723 20:11:57 -- nvmf/common.sh@717 -- # local ip 00:20:15.723 20:11:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.723 20:11:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.723 20:11:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.723 20:11:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.723 20:11:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:15.723 20:11:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.723 20:11:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:15.723 20:11:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:15.723 20:11:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:15.723 20:11:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:15.723 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.723 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.723 nvme0n1 00:20:15.723 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.723 20:11:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.723 20:11:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.723 20:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.723 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.982 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.982 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.982 
20:11:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.982 20:11:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:15.982 20:11:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.982 20:11:58 -- host/auth.sh@44 -- # digest=sha256 00:20:15.982 20:11:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.982 20:11:58 -- host/auth.sh@44 -- # keyid=3 00:20:15.982 20:11:58 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:15.982 20:11:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.982 20:11:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.982 20:11:58 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:15.982 20:11:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:20:15.982 20:11:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.982 20:11:58 -- host/auth.sh@68 -- # digest=sha256 00:20:15.982 20:11:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.982 20:11:58 -- host/auth.sh@68 -- # keyid=3 00:20:15.982 20:11:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.982 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.982 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:15.982 20:11:58 -- nvmf/common.sh@717 -- # local ip 00:20:15.982 20:11:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:15.982 20:11:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:15.982 20:11:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:15.982 20:11:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:15.982 20:11:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:15.982 20:11:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:15.982 20:11:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:15.982 20:11:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:15.982 20:11:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:15.982 20:11:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:15.982 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.982 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 nvme0n1 00:20:15.982 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:15.982 20:11:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.982 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.982 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:15.982 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.982 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.982 20:11:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:15.982 20:11:58 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:20:15.982 20:11:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:15.982 20:11:58 -- host/auth.sh@44 -- # digest=sha256 00:20:15.982 20:11:58 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:15.982 20:11:58 -- host/auth.sh@44 -- # keyid=4 00:20:15.982 20:11:58 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:15.982 20:11:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:15.982 20:11:58 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:15.982 20:11:58 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:15.982 20:11:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:20:15.982 20:11:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:15.982 20:11:58 -- host/auth.sh@68 -- # digest=sha256 00:20:15.982 20:11:58 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:15.982 20:11:58 -- host/auth.sh@68 -- # keyid=4 00:20:15.982 20:11:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.982 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.982 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:15.982 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.242 20:11:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.242 20:11:58 -- nvmf/common.sh@717 -- # local ip 00:20:16.242 20:11:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.242 20:11:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.242 20:11:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.242 20:11:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.242 20:11:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:16.242 20:11:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.242 20:11:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:16.242 20:11:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:16.242 20:11:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:16.242 20:11:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:16.242 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.242 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:16.242 nvme0n1 00:20:16.242 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.242 20:11:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.242 20:11:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:16.242 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.242 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:16.242 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.242 20:11:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.242 20:11:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:16.242 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.242 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:16.242 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.242 20:11:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:16.242 20:11:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:16.242 20:11:58 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:20:16.242 20:11:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:16.242 20:11:58 -- host/auth.sh@44 -- # digest=sha256 00:20:16.242 20:11:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:16.242 20:11:58 -- host/auth.sh@44 -- # keyid=0 00:20:16.242 20:11:58 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:16.242 20:11:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:16.242 20:11:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:16.811 20:11:58 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:16.811 20:11:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:20:16.811 20:11:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:16.811 20:11:58 -- host/auth.sh@68 -- # digest=sha256 00:20:16.811 20:11:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:16.811 20:11:58 -- host/auth.sh@68 -- # keyid=0 00:20:16.811 20:11:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:16.811 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.811 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:16.811 20:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.811 20:11:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:16.811 20:11:58 -- nvmf/common.sh@717 -- # local ip 00:20:16.811 20:11:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:16.811 20:11:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:16.811 20:11:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.811 20:11:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.811 20:11:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:16.811 20:11:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.811 20:11:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:16.811 20:11:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:16.811 20:11:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:16.811 20:11:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:16.811 20:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:16.811 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:20:17.071 nvme0n1 00:20:17.071 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.071 20:11:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.071 20:11:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.071 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.071 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.071 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.071 20:11:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.071 20:11:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.071 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.071 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.071 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.071 20:11:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.071 20:11:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:17.071 20:11:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.071 20:11:59 -- host/auth.sh@44 -- # 
digest=sha256 00:20:17.071 20:11:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.071 20:11:59 -- host/auth.sh@44 -- # keyid=1 00:20:17.071 20:11:59 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:17.071 20:11:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:17.071 20:11:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:17.071 20:11:59 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:17.072 20:11:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:20:17.072 20:11:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.072 20:11:59 -- host/auth.sh@68 -- # digest=sha256 00:20:17.072 20:11:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:17.072 20:11:59 -- host/auth.sh@68 -- # keyid=1 00:20:17.072 20:11:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.072 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.072 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.072 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.072 20:11:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.072 20:11:59 -- nvmf/common.sh@717 -- # local ip 00:20:17.072 20:11:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.072 20:11:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.072 20:11:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.072 20:11:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.072 20:11:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:17.072 20:11:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.072 20:11:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:17.072 20:11:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:17.072 20:11:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:17.072 20:11:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:17.072 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.072 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.331 nvme0n1 00:20:17.331 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.331 20:11:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.331 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.331 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.331 20:11:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.331 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.331 20:11:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.331 20:11:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.331 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.331 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.331 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.331 20:11:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.331 20:11:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:17.331 20:11:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.331 20:11:59 -- host/auth.sh@44 -- # digest=sha256 00:20:17.331 20:11:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.331 20:11:59 -- host/auth.sh@44 
-- # keyid=2 00:20:17.331 20:11:59 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:17.331 20:11:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:17.331 20:11:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:17.331 20:11:59 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:17.331 20:11:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:20:17.331 20:11:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.331 20:11:59 -- host/auth.sh@68 -- # digest=sha256 00:20:17.331 20:11:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:17.331 20:11:59 -- host/auth.sh@68 -- # keyid=2 00:20:17.331 20:11:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.331 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.331 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.331 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.331 20:11:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.331 20:11:59 -- nvmf/common.sh@717 -- # local ip 00:20:17.331 20:11:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.331 20:11:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.331 20:11:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.331 20:11:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.331 20:11:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:17.331 20:11:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.331 20:11:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:17.331 20:11:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:17.332 20:11:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:17.332 20:11:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:17.332 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.332 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.590 nvme0n1 00:20:17.590 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.591 20:11:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.591 20:11:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.591 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.591 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.591 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.591 20:11:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.591 20:11:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.591 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.591 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.591 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.591 20:11:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.591 20:11:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:17.591 20:11:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.591 20:11:59 -- host/auth.sh@44 -- # digest=sha256 00:20:17.591 20:11:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.591 20:11:59 -- host/auth.sh@44 -- # keyid=3 00:20:17.591 20:11:59 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:17.591 20:11:59 
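
The echo 'hmac(sha256)' / echo ffdhe4096 / echo DHHC-1:... steps traced at host/auth.sh lines 42-49 above are the body of nvmet_auth_set_key, which programs the kernel nvmet target side of the DH-HMAC-CHAP handshake. xtrace does not show where those echoes are redirected, so the configfs paths and the NVME_HOSTNQN variable in the sketch below are assumptions based on the kernel's standard nvmet host-entry layout, not something visible in this log:

    # Sketch of nvmet_auth_set_key, reconstructed from the xtrace markers above.
    # The redirection targets are NOT visible in this log; the configfs attribute
    # names (dhchap_hash, dhchap_dhgroup, dhchap_key) and the hosts/ path are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}   # keys[] holds DHHC-1:xx:<base64>: secrets; the xx field
                                    # records the hash used to derive the secret (00 = plain)
        local hostnqn_dir=/sys/kernel/config/nvmet/hosts/$NVME_HOSTNQN   # hypothetical path

        echo "hmac($digest)" > "$hostnqn_dir/dhchap_hash"     # e.g. hmac(sha256)
        echo "$dhgroup"      > "$hostnqn_dir/dhchap_dhgroup"  # e.g. ffdhe4096
        echo "$key"          > "$hostnqn_dir/dhchap_key"      # secret the host must present
    }
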
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:17.591 20:11:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:17.591 20:11:59 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:17.591 20:11:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:20:17.591 20:11:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.591 20:11:59 -- host/auth.sh@68 -- # digest=sha256 00:20:17.591 20:11:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:17.591 20:11:59 -- host/auth.sh@68 -- # keyid=3 00:20:17.591 20:11:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.591 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.591 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.591 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.591 20:11:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.591 20:11:59 -- nvmf/common.sh@717 -- # local ip 00:20:17.591 20:11:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.591 20:11:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.591 20:11:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.591 20:11:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.591 20:11:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:17.591 20:11:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.591 20:11:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:17.591 20:11:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:17.591 20:11:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:17.591 20:11:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:17.591 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.591 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.850 nvme0n1 00:20:17.850 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.850 20:11:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.850 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.850 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.850 20:11:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.850 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.850 20:11:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.850 20:11:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.851 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.851 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.851 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.851 20:11:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:17.851 20:11:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:17.851 20:11:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:17.851 20:11:59 -- host/auth.sh@44 -- # digest=sha256 00:20:17.851 20:11:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:17.851 20:11:59 -- host/auth.sh@44 -- # keyid=4 00:20:17.851 20:11:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:17.851 20:11:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:17.851 20:11:59 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:20:17.851 20:11:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:17.851 20:11:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:20:17.851 20:11:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:17.851 20:11:59 -- host/auth.sh@68 -- # digest=sha256 00:20:17.851 20:11:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:17.851 20:11:59 -- host/auth.sh@68 -- # keyid=4 00:20:17.851 20:11:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:17.851 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.851 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.851 20:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.851 20:11:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:17.851 20:11:59 -- nvmf/common.sh@717 -- # local ip 00:20:17.851 20:11:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:17.851 20:11:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:17.851 20:11:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.851 20:11:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.851 20:11:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:17.851 20:11:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.851 20:11:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:17.851 20:11:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:17.851 20:11:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:17.851 20:11:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.851 20:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.851 20:11:59 -- common/autotest_common.sh@10 -- # set +x 00:20:17.851 nvme0n1 00:20:17.851 20:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.851 20:12:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.851 20:12:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:17.851 20:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.851 20:12:00 -- common/autotest_common.sh@10 -- # set +x 00:20:18.115 20:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.115 20:12:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.115 20:12:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.115 20:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.115 20:12:00 -- common/autotest_common.sh@10 -- # set +x 00:20:18.116 20:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.116 20:12:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.116 20:12:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:18.116 20:12:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:18.116 20:12:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:18.116 20:12:00 -- host/auth.sh@44 -- # digest=sha256 00:20:18.116 20:12:00 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:18.116 20:12:00 -- host/auth.sh@44 -- # keyid=0 00:20:18.116 20:12:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:18.116 20:12:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:18.116 20:12:00 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:19.501 20:12:01 -- 
host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:19.501 20:12:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:20:19.501 20:12:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:19.501 20:12:01 -- host/auth.sh@68 -- # digest=sha256 00:20:19.501 20:12:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:19.501 20:12:01 -- host/auth.sh@68 -- # keyid=0 00:20:19.501 20:12:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.501 20:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.501 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:20:19.501 20:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.501 20:12:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:19.501 20:12:01 -- nvmf/common.sh@717 -- # local ip 00:20:19.501 20:12:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:19.501 20:12:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:19.501 20:12:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.501 20:12:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.501 20:12:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:19.501 20:12:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.501 20:12:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:19.501 20:12:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:19.501 20:12:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:19.501 20:12:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:19.501 20:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.501 20:12:01 -- common/autotest_common.sh@10 -- # set +x 00:20:19.761 nvme0n1 00:20:19.761 20:12:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:19.761 20:12:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.761 20:12:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:19.761 20:12:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:19.761 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.021 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.021 20:12:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.021 20:12:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.021 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.021 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.021 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.021 20:12:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.021 20:12:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:20.021 20:12:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.021 20:12:02 -- host/auth.sh@44 -- # digest=sha256 00:20:20.021 20:12:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.021 20:12:02 -- host/auth.sh@44 -- # keyid=1 00:20:20.021 20:12:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:20.021 20:12:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.021 20:12:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:20.021 20:12:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:20.021 20:12:02 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:20:20.021 20:12:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.021 20:12:02 -- host/auth.sh@68 -- # digest=sha256 00:20:20.021 20:12:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:20.021 20:12:02 -- host/auth.sh@68 -- # keyid=1 00:20:20.021 20:12:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.021 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.021 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.021 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.021 20:12:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.021 20:12:02 -- nvmf/common.sh@717 -- # local ip 00:20:20.021 20:12:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.021 20:12:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.021 20:12:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.021 20:12:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.021 20:12:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.021 20:12:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.021 20:12:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.021 20:12:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.021 20:12:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.021 20:12:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:20.021 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.021 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.281 nvme0n1 00:20:20.281 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.281 20:12:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.281 20:12:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.281 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.281 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.281 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.281 20:12:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.281 20:12:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.281 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.281 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.281 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.281 20:12:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.281 20:12:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:20.281 20:12:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.281 20:12:02 -- host/auth.sh@44 -- # digest=sha256 00:20:20.281 20:12:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.281 20:12:02 -- host/auth.sh@44 -- # keyid=2 00:20:20.281 20:12:02 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:20.281 20:12:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.281 20:12:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:20.281 20:12:02 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:20.281 20:12:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:20:20.281 20:12:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.281 20:12:02 -- 
host/auth.sh@68 -- # digest=sha256 00:20:20.281 20:12:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:20.281 20:12:02 -- host/auth.sh@68 -- # keyid=2 00:20:20.281 20:12:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.281 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.281 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.281 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.281 20:12:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.281 20:12:02 -- nvmf/common.sh@717 -- # local ip 00:20:20.281 20:12:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.281 20:12:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.281 20:12:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.281 20:12:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.281 20:12:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.281 20:12:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.281 20:12:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.281 20:12:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.281 20:12:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.281 20:12:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:20.281 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.281 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.541 nvme0n1 00:20:20.541 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.541 20:12:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.541 20:12:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:20.541 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.541 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.541 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.800 20:12:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.800 20:12:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.800 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.800 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.800 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.800 20:12:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:20.800 20:12:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:20.800 20:12:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:20.800 20:12:02 -- host/auth.sh@44 -- # digest=sha256 00:20:20.800 20:12:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.800 20:12:02 -- host/auth.sh@44 -- # keyid=3 00:20:20.800 20:12:02 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:20.800 20:12:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:20.800 20:12:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:20.800 20:12:02 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:20.800 20:12:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:20:20.800 20:12:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:20.800 20:12:02 -- host/auth.sh@68 -- # digest=sha256 00:20:20.800 20:12:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:20.800 20:12:02 
-- host/auth.sh@68 -- # keyid=3 00:20:20.800 20:12:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.800 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.800 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:20.800 20:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.800 20:12:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:20.800 20:12:02 -- nvmf/common.sh@717 -- # local ip 00:20:20.800 20:12:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:20.800 20:12:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:20.800 20:12:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.800 20:12:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.800 20:12:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:20.800 20:12:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.800 20:12:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:20.800 20:12:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:20.800 20:12:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:20.800 20:12:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:20.800 20:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.800 20:12:02 -- common/autotest_common.sh@10 -- # set +x 00:20:21.059 nvme0n1 00:20:21.059 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.059 20:12:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.059 20:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.059 20:12:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.059 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.059 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.059 20:12:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.059 20:12:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.059 20:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.059 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.059 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.059 20:12:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.059 20:12:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:21.059 20:12:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.059 20:12:03 -- host/auth.sh@44 -- # digest=sha256 00:20:21.059 20:12:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:21.059 20:12:03 -- host/auth.sh@44 -- # keyid=4 00:20:21.059 20:12:03 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:21.059 20:12:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:21.059 20:12:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:21.060 20:12:03 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:21.060 20:12:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:20:21.060 20:12:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:21.060 20:12:03 -- host/auth.sh@68 -- # digest=sha256 00:20:21.060 20:12:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:21.060 20:12:03 -- host/auth.sh@68 -- # keyid=4 00:20:21.060 20:12:03 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.060 20:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.060 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.060 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.060 20:12:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:21.060 20:12:03 -- nvmf/common.sh@717 -- # local ip 00:20:21.060 20:12:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:21.060 20:12:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:21.060 20:12:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.060 20:12:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.060 20:12:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:21.060 20:12:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.060 20:12:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:21.060 20:12:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:21.060 20:12:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:21.060 20:12:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:21.060 20:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.060 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.320 nvme0n1 00:20:21.320 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.320 20:12:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.320 20:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.320 20:12:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:21.320 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.320 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.579 20:12:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.579 20:12:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.579 20:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.579 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:20:21.579 20:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.579 20:12:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.579 20:12:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:21.579 20:12:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:21.579 20:12:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:21.579 20:12:03 -- host/auth.sh@44 -- # digest=sha256 00:20:21.579 20:12:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:21.579 20:12:03 -- host/auth.sh@44 -- # keyid=0 00:20:21.579 20:12:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:21.579 20:12:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:21.579 20:12:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:24.870 20:12:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:24.870 20:12:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:20:24.870 20:12:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:24.870 20:12:06 -- host/auth.sh@68 -- # digest=sha256 00:20:24.870 20:12:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:24.870 20:12:06 -- host/auth.sh@68 -- # keyid=0 00:20:24.870 20:12:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
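
Stepping back from the raw trace: the host/auth.sh line markers (107-111 for the loops, 66-74 for connect_authenticate) show the shape of the test being run here. The sketch below is reconstructed from those markers rather than copied from the script, and it assumes rpc_cmd is the test suite's wrapper around the SPDK JSON-RPC client; the option names, NQNs, address and port are taken verbatim from the trace:

    # Outer loop (host/auth.sh@107-111): every digest x dhgroup x keyid combination
    # is first programmed on the nvmet target, then exercised from the initiator.
    for digest in "${digests[@]}"; do            # sha256, then sha384, ...
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do       # 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

    # Host-side half (host/auth.sh@66-74, reconstructed sketch).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Connect with the matching DH-HMAC-CHAP key (get_main_ns_ip resolves to 10.0.0.1 here).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid"
        # Authentication succeeded if the controller actually came up ...
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        # ... after which it is torn down before the next combination.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The repeated "nvme0n1" lines in the log are the namespace showing up after each successful attach, and the [[ nvme0 == \n\v\m\e\0 ]] checks are the name comparison from the sketch above.
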
00:20:24.870 20:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.870 20:12:06 -- common/autotest_common.sh@10 -- # set +x 00:20:24.870 20:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:24.870 20:12:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:24.870 20:12:06 -- nvmf/common.sh@717 -- # local ip 00:20:24.870 20:12:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:24.870 20:12:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:24.870 20:12:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.870 20:12:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.870 20:12:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:24.870 20:12:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.870 20:12:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:24.870 20:12:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:24.870 20:12:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:24.870 20:12:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:24.870 20:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:24.870 20:12:06 -- common/autotest_common.sh@10 -- # set +x 00:20:25.129 nvme0n1 00:20:25.129 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.129 20:12:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.129 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.129 20:12:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.129 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.129 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 20:12:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.130 20:12:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.130 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.130 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 20:12:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.130 20:12:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:25.130 20:12:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.130 20:12:07 -- host/auth.sh@44 -- # digest=sha256 00:20:25.130 20:12:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.130 20:12:07 -- host/auth.sh@44 -- # keyid=1 00:20:25.130 20:12:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:25.130 20:12:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:25.130 20:12:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:25.130 20:12:07 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:25.130 20:12:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:20:25.130 20:12:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.130 20:12:07 -- host/auth.sh@68 -- # digest=sha256 00:20:25.130 20:12:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:25.130 20:12:07 -- host/auth.sh@68 -- # keyid=1 00:20:25.130 20:12:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.130 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 20:12:07 -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.130 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.130 20:12:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:25.130 20:12:07 -- nvmf/common.sh@717 -- # local ip 00:20:25.130 20:12:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.130 20:12:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.130 20:12:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.130 20:12:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.130 20:12:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:25.130 20:12:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.130 20:12:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:25.130 20:12:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:25.130 20:12:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:25.130 20:12:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:25.130 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.130 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.700 nvme0n1 00:20:25.700 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.700 20:12:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.700 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.700 20:12:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:25.700 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.700 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.700 20:12:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.700 20:12:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.700 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.700 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.700 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.700 20:12:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:25.700 20:12:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:25.700 20:12:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:25.700 20:12:07 -- host/auth.sh@44 -- # digest=sha256 00:20:25.700 20:12:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.700 20:12:07 -- host/auth.sh@44 -- # keyid=2 00:20:25.700 20:12:07 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:25.700 20:12:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:25.700 20:12:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:25.700 20:12:07 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:25.700 20:12:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:20:25.700 20:12:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:25.700 20:12:07 -- host/auth.sh@68 -- # digest=sha256 00:20:25.700 20:12:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:25.700 20:12:07 -- host/auth.sh@68 -- # keyid=2 00:20:25.700 20:12:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.700 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.700 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:25.700 20:12:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:25.700 20:12:07 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:25.700 20:12:07 -- nvmf/common.sh@717 -- # local ip 00:20:25.700 20:12:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:25.700 20:12:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:25.700 20:12:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.700 20:12:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.700 20:12:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:25.700 20:12:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.700 20:12:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:25.700 20:12:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:25.700 20:12:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:25.700 20:12:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:25.700 20:12:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:25.700 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:20:26.270 nvme0n1 00:20:26.270 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.270 20:12:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.270 20:12:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.270 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.270 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.270 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.270 20:12:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.270 20:12:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.270 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.270 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.270 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.270 20:12:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.270 20:12:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:26.270 20:12:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.270 20:12:08 -- host/auth.sh@44 -- # digest=sha256 00:20:26.270 20:12:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.270 20:12:08 -- host/auth.sh@44 -- # keyid=3 00:20:26.270 20:12:08 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:26.270 20:12:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:26.270 20:12:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:26.270 20:12:08 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:26.270 20:12:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:20:26.270 20:12:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.270 20:12:08 -- host/auth.sh@68 -- # digest=sha256 00:20:26.270 20:12:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:26.270 20:12:08 -- host/auth.sh@68 -- # keyid=3 00:20:26.270 20:12:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.270 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.270 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.270 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.270 20:12:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.270 20:12:08 -- nvmf/common.sh@717 -- # local ip 00:20:26.270 20:12:08 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:26.270 20:12:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:26.270 20:12:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.270 20:12:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.270 20:12:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.270 20:12:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.270 20:12:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.270 20:12:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.270 20:12:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.270 20:12:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:26.270 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.270 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 nvme0n1 00:20:26.838 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.838 20:12:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:26.838 20:12:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.838 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.838 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.838 20:12:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.838 20:12:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.838 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.838 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.838 20:12:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:26.838 20:12:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:26.838 20:12:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:26.838 20:12:08 -- host/auth.sh@44 -- # digest=sha256 00:20:26.838 20:12:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.838 20:12:08 -- host/auth.sh@44 -- # keyid=4 00:20:26.838 20:12:08 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:26.838 20:12:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:26.838 20:12:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:26.838 20:12:08 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:26.838 20:12:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:20:26.838 20:12:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:26.838 20:12:08 -- host/auth.sh@68 -- # digest=sha256 00:20:26.838 20:12:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:26.838 20:12:08 -- host/auth.sh@68 -- # keyid=4 00:20:26.838 20:12:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.838 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.838 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:26.838 20:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.838 20:12:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:26.838 20:12:08 -- nvmf/common.sh@717 -- # local ip 00:20:26.838 20:12:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:26.838 20:12:08 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:26.838 20:12:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.838 20:12:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.838 20:12:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:26.838 20:12:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.838 20:12:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:26.838 20:12:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:26.838 20:12:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:26.838 20:12:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:26.838 20:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.838 20:12:08 -- common/autotest_common.sh@10 -- # set +x 00:20:27.406 nvme0n1 00:20:27.406 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.406 20:12:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.406 20:12:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.406 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.406 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.406 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.406 20:12:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.406 20:12:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.406 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.406 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.406 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.406 20:12:09 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:27.406 20:12:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.406 20:12:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.406 20:12:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:27.406 20:12:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.406 20:12:09 -- host/auth.sh@44 -- # digest=sha384 00:20:27.406 20:12:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.406 20:12:09 -- host/auth.sh@44 -- # keyid=0 00:20:27.406 20:12:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:27.406 20:12:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.406 20:12:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:27.406 20:12:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:27.406 20:12:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:20:27.406 20:12:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.406 20:12:09 -- host/auth.sh@68 -- # digest=sha384 00:20:27.406 20:12:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:27.406 20:12:09 -- host/auth.sh@68 -- # keyid=0 00:20:27.406 20:12:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.406 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.406 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.406 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.406 20:12:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.406 20:12:09 -- nvmf/common.sh@717 -- # local ip 00:20:27.406 20:12:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.406 20:12:09 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:27.406 20:12:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.406 20:12:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.406 20:12:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.406 20:12:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.406 20:12:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.406 20:12:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.406 20:12:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.406 20:12:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:27.406 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.406 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.683 nvme0n1 00:20:27.683 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.683 20:12:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.683 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.683 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.683 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.683 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.683 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.683 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.683 20:12:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:27.683 20:12:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.683 20:12:09 -- host/auth.sh@44 -- # digest=sha384 00:20:27.683 20:12:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.683 20:12:09 -- host/auth.sh@44 -- # keyid=1 00:20:27.683 20:12:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:27.683 20:12:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.683 20:12:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:27.683 20:12:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:27.683 20:12:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:20:27.683 20:12:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.683 20:12:09 -- host/auth.sh@68 -- # digest=sha384 00:20:27.683 20:12:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:27.683 20:12:09 -- host/auth.sh@68 -- # keyid=1 00:20:27.683 20:12:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.683 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.683 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.683 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.683 20:12:09 -- nvmf/common.sh@717 -- # local ip 00:20:27.683 20:12:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.683 20:12:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.683 20:12:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.683 
20:12:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.683 20:12:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.683 20:12:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.683 20:12:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.683 20:12:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.683 20:12:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.683 20:12:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:27.683 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.683 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.683 nvme0n1 00:20:27.683 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.683 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.683 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.683 20:12:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.683 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.683 20:12:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.683 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.683 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.955 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.955 20:12:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.955 20:12:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:27.955 20:12:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.955 20:12:09 -- host/auth.sh@44 -- # digest=sha384 00:20:27.955 20:12:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.955 20:12:09 -- host/auth.sh@44 -- # keyid=2 00:20:27.955 20:12:09 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:27.955 20:12:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.955 20:12:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:27.955 20:12:09 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:27.955 20:12:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:20:27.955 20:12:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.955 20:12:09 -- host/auth.sh@68 -- # digest=sha384 00:20:27.955 20:12:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:27.955 20:12:09 -- host/auth.sh@68 -- # keyid=2 00:20:27.955 20:12:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.955 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.955 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.955 20:12:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.955 20:12:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.955 20:12:09 -- nvmf/common.sh@717 -- # local ip 00:20:27.955 20:12:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.955 20:12:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.955 20:12:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.955 20:12:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.955 20:12:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.955 20:12:09 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.955 20:12:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:27.955 20:12:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.955 20:12:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.955 20:12:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:27.955 20:12:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.955 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:20:27.955 nvme0n1 00:20:27.955 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.955 20:12:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:27.955 20:12:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.955 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.955 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:27.955 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.955 20:12:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.955 20:12:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.955 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.955 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:27.955 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.955 20:12:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:27.956 20:12:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:27.956 20:12:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:27.956 20:12:10 -- host/auth.sh@44 -- # digest=sha384 00:20:27.956 20:12:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:27.956 20:12:10 -- host/auth.sh@44 -- # keyid=3 00:20:27.956 20:12:10 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:27.956 20:12:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:27.956 20:12:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:27.956 20:12:10 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:27.956 20:12:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:20:27.956 20:12:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:27.956 20:12:10 -- host/auth.sh@68 -- # digest=sha384 00:20:27.956 20:12:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:27.956 20:12:10 -- host/auth.sh@68 -- # keyid=3 00:20:27.956 20:12:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.956 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.956 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:27.956 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.956 20:12:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:27.956 20:12:10 -- nvmf/common.sh@717 -- # local ip 00:20:27.956 20:12:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:27.956 20:12:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:27.956 20:12:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.956 20:12:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.956 20:12:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:27.956 20:12:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.956 20:12:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
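
The nvmf/common.sh lines 717-731 that keep appearing around here are the get_main_ns_ip helper: it maps the transport under test to the name of the shell variable holding the address the initiator should dial (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints that variable's value, which in this virt/tcp run is always 10.0.0.1. A sketch reconstructed from the trace; the TEST_TRANSPORT variable name and the failure branches are assumptions, since only the success path is traced:

    # Sketch of get_main_ns_ip (nvmf/common.sh@717-731 in the markers above).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # rdma runs dial the first target-side IP
            ["tcp"]=NVMF_INITIATOR_IP       # tcp runs dial the initiator-side IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                    # assumed variable name
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
        ip=${ip_candidates[$TEST_TRANSPORT]}                     # variable *name*, not value
        [[ -z ${!ip} ]] && return 1                              # indirect expansion; 10.0.0.1 here
        echo "${!ip}"
    }
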
00:20:27.956 20:12:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:27.956 20:12:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:27.956 20:12:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:27.956 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.956 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 nvme0n1 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.216 20:12:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.216 20:12:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:28.216 20:12:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.216 20:12:10 -- host/auth.sh@44 -- # digest=sha384 00:20:28.216 20:12:10 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.216 20:12:10 -- host/auth.sh@44 -- # keyid=4 00:20:28.216 20:12:10 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:28.216 20:12:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:28.216 20:12:10 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:28.216 20:12:10 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:28.216 20:12:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:20:28.216 20:12:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.216 20:12:10 -- host/auth.sh@68 -- # digest=sha384 00:20:28.216 20:12:10 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:28.216 20:12:10 -- host/auth.sh@68 -- # keyid=4 00:20:28.216 20:12:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.216 20:12:10 -- nvmf/common.sh@717 -- # local ip 00:20:28.216 20:12:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.216 20:12:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.216 20:12:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.216 20:12:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.216 20:12:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:28.216 20:12:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.216 20:12:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:28.216 20:12:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:28.216 
20:12:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:28.216 20:12:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 nvme0n1 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.216 20:12:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.216 20:12:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.216 20:12:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:28.216 20:12:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.216 20:12:10 -- host/auth.sh@44 -- # digest=sha384 00:20:28.216 20:12:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.216 20:12:10 -- host/auth.sh@44 -- # keyid=0 00:20:28.216 20:12:10 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:28.216 20:12:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:28.216 20:12:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:28.216 20:12:10 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:28.216 20:12:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:20:28.216 20:12:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.216 20:12:10 -- host/auth.sh@68 -- # digest=sha384 00:20:28.216 20:12:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:28.216 20:12:10 -- host/auth.sh@68 -- # keyid=0 00:20:28.216 20:12:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.216 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.216 20:12:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.216 20:12:10 -- nvmf/common.sh@717 -- # local ip 00:20:28.216 20:12:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.216 20:12:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.216 20:12:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.216 20:12:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.216 20:12:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:28.216 20:12:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.216 20:12:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:28.216 20:12:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:28.216 20:12:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:28.216 20:12:10 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:28.216 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.216 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.475 nvme0n1 00:20:28.475 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.475 20:12:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.475 20:12:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.475 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.475 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.475 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.475 20:12:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.475 20:12:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.475 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.475 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.475 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.475 20:12:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.475 20:12:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:28.475 20:12:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.475 20:12:10 -- host/auth.sh@44 -- # digest=sha384 00:20:28.475 20:12:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.475 20:12:10 -- host/auth.sh@44 -- # keyid=1 00:20:28.475 20:12:10 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:28.475 20:12:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:28.475 20:12:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:28.475 20:12:10 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:28.475 20:12:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:20:28.475 20:12:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.475 20:12:10 -- host/auth.sh@68 -- # digest=sha384 00:20:28.475 20:12:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:28.475 20:12:10 -- host/auth.sh@68 -- # keyid=1 00:20:28.475 20:12:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.475 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.475 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.475 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.475 20:12:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.475 20:12:10 -- nvmf/common.sh@717 -- # local ip 00:20:28.475 20:12:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.475 20:12:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.475 20:12:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.475 20:12:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.475 20:12:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:28.475 20:12:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.475 20:12:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:28.475 20:12:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:28.475 20:12:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:28.475 20:12:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:28.475 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.475 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 nvme0n1 00:20:28.783 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.783 20:12:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.783 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.783 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 20:12:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.783 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.783 20:12:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.783 20:12:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.783 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.783 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.783 20:12:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:28.783 20:12:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:28.783 20:12:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:28.783 20:12:10 -- host/auth.sh@44 -- # digest=sha384 00:20:28.783 20:12:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:28.783 20:12:10 -- host/auth.sh@44 -- # keyid=2 00:20:28.783 20:12:10 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:28.783 20:12:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:28.783 20:12:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:28.783 20:12:10 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:28.783 20:12:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:20:28.783 20:12:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:28.783 20:12:10 -- host/auth.sh@68 -- # digest=sha384 00:20:28.783 20:12:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:28.783 20:12:10 -- host/auth.sh@68 -- # keyid=2 00:20:28.783 20:12:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:28.783 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.783 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.783 20:12:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:28.783 20:12:10 -- nvmf/common.sh@717 -- # local ip 00:20:28.783 20:12:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:28.783 20:12:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:28.783 20:12:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.783 20:12:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.783 20:12:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:28.783 20:12:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.783 20:12:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:28.783 20:12:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:28.783 20:12:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:28.783 20:12:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:28.783 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.783 
20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 nvme0n1 00:20:28.783 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.783 20:12:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.783 20:12:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:28.783 20:12:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.783 20:12:10 -- common/autotest_common.sh@10 -- # set +x 00:20:28.783 20:12:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:28.783 20:12:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.783 20:12:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.783 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:28.783 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.041 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.041 20:12:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:29.041 20:12:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.041 20:12:11 -- host/auth.sh@44 -- # digest=sha384 00:20:29.041 20:12:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.041 20:12:11 -- host/auth.sh@44 -- # keyid=3 00:20:29.041 20:12:11 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:29.041 20:12:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.041 20:12:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:29.041 20:12:11 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:29.041 20:12:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:20:29.041 20:12:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.041 20:12:11 -- host/auth.sh@68 -- # digest=sha384 00:20:29.041 20:12:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:29.041 20:12:11 -- host/auth.sh@68 -- # keyid=3 00:20:29.041 20:12:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.041 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.041 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.041 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.041 20:12:11 -- nvmf/common.sh@717 -- # local ip 00:20:29.041 20:12:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.041 20:12:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.041 20:12:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.041 20:12:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.041 20:12:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.041 20:12:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.041 20:12:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.041 20:12:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.041 20:12:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.041 20:12:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:29.041 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.041 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.041 nvme0n1 00:20:29.041 20:12:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.041 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.041 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.041 20:12:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.041 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.041 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.041 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.041 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.041 20:12:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:29.041 20:12:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.041 20:12:11 -- host/auth.sh@44 -- # digest=sha384 00:20:29.041 20:12:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.041 20:12:11 -- host/auth.sh@44 -- # keyid=4 00:20:29.041 20:12:11 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:29.041 20:12:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.041 20:12:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:29.041 20:12:11 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:29.041 20:12:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:20:29.041 20:12:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.041 20:12:11 -- host/auth.sh@68 -- # digest=sha384 00:20:29.041 20:12:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:29.041 20:12:11 -- host/auth.sh@68 -- # keyid=4 00:20:29.041 20:12:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.041 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.041 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.041 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.041 20:12:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.041 20:12:11 -- nvmf/common.sh@717 -- # local ip 00:20:29.041 20:12:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.041 20:12:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.041 20:12:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.041 20:12:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.041 20:12:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.041 20:12:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.041 20:12:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.041 20:12:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.041 20:12:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.041 20:12:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:29.041 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.041 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.299 nvme0n1 00:20:29.299 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.299 20:12:11 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.299 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.299 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.299 20:12:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.299 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.299 20:12:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.299 20:12:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.299 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.299 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.299 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.299 20:12:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.299 20:12:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.299 20:12:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:29.299 20:12:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.299 20:12:11 -- host/auth.sh@44 -- # digest=sha384 00:20:29.299 20:12:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.299 20:12:11 -- host/auth.sh@44 -- # keyid=0 00:20:29.299 20:12:11 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:29.299 20:12:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.299 20:12:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:29.299 20:12:11 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:29.299 20:12:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:20:29.299 20:12:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.299 20:12:11 -- host/auth.sh@68 -- # digest=sha384 00:20:29.299 20:12:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:29.299 20:12:11 -- host/auth.sh@68 -- # keyid=0 00:20:29.299 20:12:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.299 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.299 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.299 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.299 20:12:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.299 20:12:11 -- nvmf/common.sh@717 -- # local ip 00:20:29.299 20:12:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.299 20:12:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.299 20:12:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.299 20:12:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.299 20:12:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.299 20:12:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.299 20:12:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.299 20:12:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.299 20:12:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.300 20:12:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:29.300 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.300 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.558 nvme0n1 00:20:29.558 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.558 20:12:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:29.558 20:12:11 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:29.558 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.558 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.558 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.558 20:12:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.558 20:12:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.558 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.558 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.558 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.558 20:12:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.558 20:12:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:29.558 20:12:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.558 20:12:11 -- host/auth.sh@44 -- # digest=sha384 00:20:29.558 20:12:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.558 20:12:11 -- host/auth.sh@44 -- # keyid=1 00:20:29.558 20:12:11 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:29.558 20:12:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.558 20:12:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:29.558 20:12:11 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:29.558 20:12:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:20:29.558 20:12:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.558 20:12:11 -- host/auth.sh@68 -- # digest=sha384 00:20:29.558 20:12:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:29.558 20:12:11 -- host/auth.sh@68 -- # keyid=1 00:20:29.558 20:12:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.558 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.558 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.558 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.558 20:12:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.558 20:12:11 -- nvmf/common.sh@717 -- # local ip 00:20:29.558 20:12:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.558 20:12:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.558 20:12:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.558 20:12:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.558 20:12:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.558 20:12:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.558 20:12:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.558 20:12:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.558 20:12:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.558 20:12:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:29.558 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.558 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.818 nvme0n1 00:20:29.818 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.818 20:12:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.818 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.818 20:12:11 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:20:29.818 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.818 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.818 20:12:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.818 20:12:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.818 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.818 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.818 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.818 20:12:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:29.818 20:12:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:29.819 20:12:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:29.819 20:12:11 -- host/auth.sh@44 -- # digest=sha384 00:20:29.819 20:12:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:29.819 20:12:11 -- host/auth.sh@44 -- # keyid=2 00:20:29.819 20:12:11 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:29.819 20:12:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:29.819 20:12:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:29.819 20:12:11 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:29.819 20:12:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:20:29.819 20:12:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:29.819 20:12:11 -- host/auth.sh@68 -- # digest=sha384 00:20:29.819 20:12:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:29.819 20:12:11 -- host/auth.sh@68 -- # keyid=2 00:20:29.819 20:12:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:29.819 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.819 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:29.819 20:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.819 20:12:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:29.819 20:12:11 -- nvmf/common.sh@717 -- # local ip 00:20:29.819 20:12:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:29.819 20:12:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:29.819 20:12:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.819 20:12:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.819 20:12:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:29.819 20:12:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.819 20:12:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:29.819 20:12:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:29.819 20:12:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:29.819 20:12:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:29.819 20:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.819 20:12:11 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 nvme0n1 00:20:30.076 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 20:12:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.076 20:12:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.076 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 20:12:12 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.076 20:12:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.076 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 20:12:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.076 20:12:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:30.076 20:12:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.076 20:12:12 -- host/auth.sh@44 -- # digest=sha384 00:20:30.076 20:12:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.076 20:12:12 -- host/auth.sh@44 -- # keyid=3 00:20:30.076 20:12:12 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:30.076 20:12:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:30.076 20:12:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:30.076 20:12:12 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:30.076 20:12:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:20:30.076 20:12:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.076 20:12:12 -- host/auth.sh@68 -- # digest=sha384 00:20:30.076 20:12:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:30.076 20:12:12 -- host/auth.sh@68 -- # keyid=3 00:20:30.076 20:12:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.076 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.076 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.076 20:12:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.076 20:12:12 -- nvmf/common.sh@717 -- # local ip 00:20:30.076 20:12:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.076 20:12:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.076 20:12:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.076 20:12:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.076 20:12:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.076 20:12:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.076 20:12:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.076 20:12:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.076 20:12:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.076 20:12:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:30.076 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.076 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 nvme0n1 00:20:30.334 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 20:12:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.334 20:12:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.334 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 20:12:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.334 20:12:12 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:30.334 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 20:12:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.334 20:12:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:30.334 20:12:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.334 20:12:12 -- host/auth.sh@44 -- # digest=sha384 00:20:30.334 20:12:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.334 20:12:12 -- host/auth.sh@44 -- # keyid=4 00:20:30.334 20:12:12 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:30.334 20:12:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:30.334 20:12:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:30.334 20:12:12 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:30.334 20:12:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:20:30.334 20:12:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.334 20:12:12 -- host/auth.sh@68 -- # digest=sha384 00:20:30.334 20:12:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:30.334 20:12:12 -- host/auth.sh@68 -- # keyid=4 00:20:30.334 20:12:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.334 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.334 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.334 20:12:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.334 20:12:12 -- nvmf/common.sh@717 -- # local ip 00:20:30.334 20:12:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.334 20:12:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.334 20:12:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.334 20:12:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.334 20:12:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.334 20:12:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.334 20:12:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.334 20:12:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.334 20:12:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.334 20:12:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.334 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.334 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 nvme0n1 00:20:30.592 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 20:12:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.592 20:12:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.592 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 20:12:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.592 20:12:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.592 20:12:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 20:12:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.592 20:12:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:30.592 20:12:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:30.592 20:12:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:30.592 20:12:12 -- host/auth.sh@44 -- # digest=sha384 00:20:30.592 20:12:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:30.592 20:12:12 -- host/auth.sh@44 -- # keyid=0 00:20:30.592 20:12:12 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:30.592 20:12:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:30.592 20:12:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:30.592 20:12:12 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:30.592 20:12:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:20:30.592 20:12:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:30.592 20:12:12 -- host/auth.sh@68 -- # digest=sha384 00:20:30.592 20:12:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:30.592 20:12:12 -- host/auth.sh@68 -- # keyid=0 00:20:30.592 20:12:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.592 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.592 20:12:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.592 20:12:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:30.592 20:12:12 -- nvmf/common.sh@717 -- # local ip 00:20:30.592 20:12:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:30.592 20:12:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:30.592 20:12:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.592 20:12:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.592 20:12:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:30.592 20:12:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.592 20:12:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:30.592 20:12:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:30.592 20:12:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:30.592 20:12:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:30.592 20:12:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.592 20:12:12 -- common/autotest_common.sh@10 -- # set +x 00:20:30.850 nvme0n1 00:20:30.850 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.850 20:12:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.850 20:12:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:30.850 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.850 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:30.850 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:30.850 20:12:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.850 20:12:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.850 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:30.850 20:12:13 -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.108 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.108 20:12:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.108 20:12:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:31.108 20:12:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.108 20:12:13 -- host/auth.sh@44 -- # digest=sha384 00:20:31.108 20:12:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.108 20:12:13 -- host/auth.sh@44 -- # keyid=1 00:20:31.108 20:12:13 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:31.108 20:12:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:31.108 20:12:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:31.108 20:12:13 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:31.108 20:12:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:20:31.108 20:12:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.108 20:12:13 -- host/auth.sh@68 -- # digest=sha384 00:20:31.108 20:12:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:31.108 20:12:13 -- host/auth.sh@68 -- # keyid=1 00:20:31.108 20:12:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.108 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.108 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.108 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.108 20:12:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.108 20:12:13 -- nvmf/common.sh@717 -- # local ip 00:20:31.108 20:12:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.108 20:12:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.108 20:12:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.108 20:12:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.108 20:12:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.108 20:12:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.108 20:12:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.108 20:12:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.108 20:12:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.108 20:12:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:31.108 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.108 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.367 nvme0n1 00:20:31.367 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.367 20:12:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.367 20:12:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.367 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.367 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.367 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.367 20:12:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.367 20:12:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.367 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.367 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.367 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:20:31.367 20:12:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.367 20:12:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:31.367 20:12:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.367 20:12:13 -- host/auth.sh@44 -- # digest=sha384 00:20:31.367 20:12:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.367 20:12:13 -- host/auth.sh@44 -- # keyid=2 00:20:31.367 20:12:13 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:31.367 20:12:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:31.367 20:12:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:31.367 20:12:13 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:31.367 20:12:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:20:31.367 20:12:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.367 20:12:13 -- host/auth.sh@68 -- # digest=sha384 00:20:31.367 20:12:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:31.367 20:12:13 -- host/auth.sh@68 -- # keyid=2 00:20:31.367 20:12:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.367 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.367 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.367 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.367 20:12:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.367 20:12:13 -- nvmf/common.sh@717 -- # local ip 00:20:31.367 20:12:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.367 20:12:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.367 20:12:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.367 20:12:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.367 20:12:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.368 20:12:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.368 20:12:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.368 20:12:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.368 20:12:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.368 20:12:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:31.368 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.368 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.626 nvme0n1 00:20:31.626 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.626 20:12:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.626 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.626 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.626 20:12:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:31.626 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.884 20:12:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.884 20:12:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.884 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.884 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.884 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.884 20:12:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:31.884 20:12:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
00:20:31.884 20:12:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:31.884 20:12:13 -- host/auth.sh@44 -- # digest=sha384 00:20:31.884 20:12:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:31.884 20:12:13 -- host/auth.sh@44 -- # keyid=3 00:20:31.884 20:12:13 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:31.884 20:12:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:31.884 20:12:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:31.884 20:12:13 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:31.884 20:12:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:20:31.884 20:12:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:31.884 20:12:13 -- host/auth.sh@68 -- # digest=sha384 00:20:31.884 20:12:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:31.884 20:12:13 -- host/auth.sh@68 -- # keyid=3 00:20:31.884 20:12:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.884 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.884 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:31.884 20:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.885 20:12:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:31.885 20:12:13 -- nvmf/common.sh@717 -- # local ip 00:20:31.885 20:12:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:31.885 20:12:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:31.885 20:12:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.885 20:12:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.885 20:12:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:31.885 20:12:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.885 20:12:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:31.885 20:12:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:31.885 20:12:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:31.885 20:12:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:31.885 20:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.885 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:20:32.147 nvme0n1 00:20:32.147 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.147 20:12:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.147 20:12:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.147 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.147 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.147 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.147 20:12:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.147 20:12:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.147 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.147 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.147 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.147 20:12:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.147 20:12:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:32.147 20:12:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.147 20:12:14 -- host/auth.sh@44 -- 
# digest=sha384 00:20:32.147 20:12:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.147 20:12:14 -- host/auth.sh@44 -- # keyid=4 00:20:32.147 20:12:14 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:32.147 20:12:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:32.147 20:12:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:32.147 20:12:14 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:32.147 20:12:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:20:32.147 20:12:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.147 20:12:14 -- host/auth.sh@68 -- # digest=sha384 00:20:32.147 20:12:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:32.147 20:12:14 -- host/auth.sh@68 -- # keyid=4 00:20:32.147 20:12:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.147 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.147 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.147 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.147 20:12:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:32.147 20:12:14 -- nvmf/common.sh@717 -- # local ip 00:20:32.147 20:12:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.147 20:12:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.147 20:12:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.147 20:12:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.148 20:12:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:32.148 20:12:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.148 20:12:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:32.148 20:12:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:32.148 20:12:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:32.148 20:12:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:32.148 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.148 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.424 nvme0n1 00:20:32.424 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.424 20:12:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.424 20:12:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.424 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.424 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.424 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.424 20:12:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.424 20:12:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.424 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.424 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.424 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.424 20:12:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.424 20:12:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.424 20:12:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:32.424 20:12:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.424 20:12:14 -- host/auth.sh@44 -- # 
digest=sha384 00:20:32.424 20:12:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.424 20:12:14 -- host/auth.sh@44 -- # keyid=0 00:20:32.424 20:12:14 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:32.424 20:12:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:32.424 20:12:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:32.424 20:12:14 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:32.424 20:12:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:20:32.424 20:12:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.424 20:12:14 -- host/auth.sh@68 -- # digest=sha384 00:20:32.424 20:12:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:32.424 20:12:14 -- host/auth.sh@68 -- # keyid=0 00:20:32.424 20:12:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.424 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.424 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.424 20:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.424 20:12:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:32.424 20:12:14 -- nvmf/common.sh@717 -- # local ip 00:20:32.424 20:12:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:32.424 20:12:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:32.424 20:12:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.424 20:12:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.424 20:12:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:32.424 20:12:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.424 20:12:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:32.424 20:12:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:32.424 20:12:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:32.424 20:12:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:32.424 20:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.424 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:20:32.991 nvme0n1 00:20:32.991 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.991 20:12:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.991 20:12:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:32.991 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.991 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:32.991 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.991 20:12:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.991 20:12:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.991 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.991 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:32.991 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.991 20:12:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:32.991 20:12:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:32.991 20:12:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:32.991 20:12:15 -- host/auth.sh@44 -- # digest=sha384 00:20:32.991 20:12:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:32.991 20:12:15 -- host/auth.sh@44 -- # keyid=1 00:20:32.991 20:12:15 -- 
host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:32.991 20:12:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:32.991 20:12:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:32.991 20:12:15 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:32.991 20:12:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:20:32.991 20:12:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:32.991 20:12:15 -- host/auth.sh@68 -- # digest=sha384 00:20:32.991 20:12:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:32.991 20:12:15 -- host/auth.sh@68 -- # keyid=1 00:20:32.991 20:12:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:32.991 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.991 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.249 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.249 20:12:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.249 20:12:15 -- nvmf/common.sh@717 -- # local ip 00:20:33.249 20:12:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.249 20:12:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.249 20:12:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.249 20:12:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.249 20:12:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.249 20:12:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.249 20:12:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.249 20:12:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.249 20:12:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.249 20:12:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:33.249 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.249 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.507 nvme0n1 00:20:33.507 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.507 20:12:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:33.507 20:12:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.507 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.507 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.765 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.765 20:12:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.765 20:12:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.765 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.765 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.765 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.765 20:12:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:33.765 20:12:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:33.765 20:12:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:33.765 20:12:15 -- host/auth.sh@44 -- # digest=sha384 00:20:33.765 20:12:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:33.765 20:12:15 -- host/auth.sh@44 -- # keyid=2 00:20:33.765 20:12:15 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:33.765 20:12:15 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:33.765 20:12:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:33.765 20:12:15 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:33.765 20:12:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:20:33.765 20:12:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:33.765 20:12:15 -- host/auth.sh@68 -- # digest=sha384 00:20:33.765 20:12:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:33.765 20:12:15 -- host/auth.sh@68 -- # keyid=2 00:20:33.765 20:12:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:33.765 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.765 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:33.765 20:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:33.765 20:12:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:33.765 20:12:15 -- nvmf/common.sh@717 -- # local ip 00:20:33.765 20:12:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:33.765 20:12:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:33.765 20:12:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.765 20:12:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.765 20:12:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:33.765 20:12:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.765 20:12:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:33.765 20:12:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:33.765 20:12:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:33.765 20:12:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:33.765 20:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:33.765 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:20:34.334 nvme0n1 00:20:34.334 20:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.334 20:12:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.334 20:12:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.334 20:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.334 20:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:34.334 20:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.334 20:12:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.334 20:12:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.334 20:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.334 20:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:34.334 20:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.334 20:12:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.334 20:12:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:34.334 20:12:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.334 20:12:16 -- host/auth.sh@44 -- # digest=sha384 00:20:34.334 20:12:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.334 20:12:16 -- host/auth.sh@44 -- # keyid=3 00:20:34.334 20:12:16 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:34.334 20:12:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.334 20:12:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:34.334 20:12:16 -- host/auth.sh@49 
-- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:34.334 20:12:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:20:34.334 20:12:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.334 20:12:16 -- host/auth.sh@68 -- # digest=sha384 00:20:34.334 20:12:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:34.334 20:12:16 -- host/auth.sh@68 -- # keyid=3 00:20:34.334 20:12:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.334 20:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.334 20:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:34.334 20:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.334 20:12:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.334 20:12:16 -- nvmf/common.sh@717 -- # local ip 00:20:34.334 20:12:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.334 20:12:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.334 20:12:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.334 20:12:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.334 20:12:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.334 20:12:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.334 20:12:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.334 20:12:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.334 20:12:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.334 20:12:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:34.334 20:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.334 20:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:34.901 nvme0n1 00:20:34.901 20:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.901 20:12:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.901 20:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.901 20:12:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:34.901 20:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:34.901 20:12:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.901 20:12:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.901 20:12:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.901 20:12:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.901 20:12:16 -- common/autotest_common.sh@10 -- # set +x 00:20:34.901 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.901 20:12:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:34.901 20:12:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:34.901 20:12:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:34.901 20:12:17 -- host/auth.sh@44 -- # digest=sha384 00:20:34.901 20:12:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.901 20:12:17 -- host/auth.sh@44 -- # keyid=4 00:20:34.901 20:12:17 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:34.901 20:12:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:20:34.901 20:12:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:34.901 20:12:17 -- host/auth.sh@49 -- # echo 
DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:34.901 20:12:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:20:34.901 20:12:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:34.901 20:12:17 -- host/auth.sh@68 -- # digest=sha384 00:20:34.901 20:12:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:34.901 20:12:17 -- host/auth.sh@68 -- # keyid=4 00:20:34.901 20:12:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.901 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.901 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:34.901 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:34.901 20:12:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:34.901 20:12:17 -- nvmf/common.sh@717 -- # local ip 00:20:34.901 20:12:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:34.901 20:12:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:34.901 20:12:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.901 20:12:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.901 20:12:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:34.901 20:12:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.901 20:12:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:34.901 20:12:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:34.901 20:12:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:34.901 20:12:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.901 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:34.901 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 nvme0n1 00:20:35.482 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.482 20:12:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.482 20:12:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.482 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.482 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.482 20:12:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.482 20:12:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.482 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.482 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.482 20:12:17 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:20:35.482 20:12:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.482 20:12:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.482 20:12:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:35.482 20:12:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.482 20:12:17 -- host/auth.sh@44 -- # digest=sha512 00:20:35.482 20:12:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.482 20:12:17 -- host/auth.sh@44 -- # keyid=0 00:20:35.482 20:12:17 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:35.482 20:12:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.482 20:12:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.482 
20:12:17 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:35.482 20:12:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:20:35.482 20:12:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.482 20:12:17 -- host/auth.sh@68 -- # digest=sha512 00:20:35.482 20:12:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.482 20:12:17 -- host/auth.sh@68 -- # keyid=0 00:20:35.482 20:12:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.482 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.482 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.482 20:12:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.482 20:12:17 -- nvmf/common.sh@717 -- # local ip 00:20:35.482 20:12:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.482 20:12:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.482 20:12:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.482 20:12:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.482 20:12:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.482 20:12:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.482 20:12:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.482 20:12:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.482 20:12:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.483 20:12:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:35.483 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.483 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.483 nvme0n1 00:20:35.483 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.742 20:12:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.742 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.742 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.742 20:12:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:35.742 20:12:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.742 20:12:17 -- host/auth.sh@44 -- # digest=sha512 00:20:35.742 20:12:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.742 20:12:17 -- host/auth.sh@44 -- # keyid=1 00:20:35.742 20:12:17 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:35.742 20:12:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.742 20:12:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.742 20:12:17 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:35.742 20:12:17 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:20:35.742 20:12:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.742 20:12:17 -- host/auth.sh@68 -- # digest=sha512 00:20:35.742 20:12:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.742 20:12:17 -- host/auth.sh@68 -- # keyid=1 00:20:35.742 20:12:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.742 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:35.742 20:12:17 -- nvmf/common.sh@717 -- # local ip 00:20:35.742 20:12:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:35.742 20:12:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:35.742 20:12:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.742 20:12:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.742 20:12:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:35.742 20:12:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.742 20:12:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:35.742 20:12:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:35.742 20:12:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:35.742 20:12:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.742 nvme0n1 00:20:35.742 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.742 20:12:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.742 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:35.742 20:12:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.742 20:12:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:35.742 20:12:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:35.742 20:12:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:35.742 20:12:17 -- host/auth.sh@44 -- # digest=sha512 00:20:35.742 20:12:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:35.742 20:12:17 -- host/auth.sh@44 -- # keyid=2 00:20:35.742 20:12:17 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:35.742 20:12:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:35.742 20:12:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:35.742 20:12:17 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:35.742 20:12:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:20:35.742 20:12:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:35.742 20:12:17 -- 
host/auth.sh@68 -- # digest=sha512 00:20:35.742 20:12:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:35.742 20:12:17 -- host/auth.sh@68 -- # keyid=2 00:20:35.742 20:12:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:35.742 20:12:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.742 20:12:17 -- common/autotest_common.sh@10 -- # set +x 00:20:36.000 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.000 20:12:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.000 20:12:18 -- nvmf/common.sh@717 -- # local ip 00:20:36.000 20:12:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.000 20:12:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.000 20:12:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.000 20:12:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.000 20:12:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.000 20:12:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.000 20:12:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.000 20:12:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.000 20:12:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.000 20:12:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:36.000 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.000 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.000 nvme0n1 00:20:36.000 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.000 20:12:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.000 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.000 20:12:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.000 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.000 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.000 20:12:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.000 20:12:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.000 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.000 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.000 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.000 20:12:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.000 20:12:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:36.000 20:12:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.000 20:12:18 -- host/auth.sh@44 -- # digest=sha512 00:20:36.000 20:12:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.000 20:12:18 -- host/auth.sh@44 -- # keyid=3 00:20:36.000 20:12:18 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:36.000 20:12:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.000 20:12:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:36.000 20:12:18 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:36.000 20:12:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:20:36.000 20:12:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.000 20:12:18 -- host/auth.sh@68 -- # digest=sha512 00:20:36.000 20:12:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:36.000 20:12:18 
-- host/auth.sh@68 -- # keyid=3 00:20:36.000 20:12:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.000 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.000 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.000 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.000 20:12:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.000 20:12:18 -- nvmf/common.sh@717 -- # local ip 00:20:36.000 20:12:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.000 20:12:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.000 20:12:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.000 20:12:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.000 20:12:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.000 20:12:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.000 20:12:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.000 20:12:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.000 20:12:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.000 20:12:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:36.000 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.000 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 nvme0n1 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.296 20:12:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:36.296 20:12:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.296 20:12:18 -- host/auth.sh@44 -- # digest=sha512 00:20:36.296 20:12:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:36.296 20:12:18 -- host/auth.sh@44 -- # keyid=4 00:20:36.296 20:12:18 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:36.296 20:12:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.296 20:12:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:36.296 20:12:18 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:36.296 20:12:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:20:36.296 20:12:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.296 20:12:18 -- host/auth.sh@68 -- # digest=sha512 00:20:36.296 20:12:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:20:36.296 20:12:18 -- host/auth.sh@68 -- # keyid=4 00:20:36.296 20:12:18 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.296 20:12:18 -- nvmf/common.sh@717 -- # local ip 00:20:36.296 20:12:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.296 20:12:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.296 20:12:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.296 20:12:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.296 20:12:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.296 20:12:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.296 20:12:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.296 20:12:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.296 20:12:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.296 20:12:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 nvme0n1 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.296 20:12:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.296 20:12:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.296 20:12:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:36.296 20:12:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.296 20:12:18 -- host/auth.sh@44 -- # digest=sha512 00:20:36.296 20:12:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.296 20:12:18 -- host/auth.sh@44 -- # keyid=0 00:20:36.296 20:12:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:36.296 20:12:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.296 20:12:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.296 20:12:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:36.296 20:12:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:20:36.296 20:12:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.296 20:12:18 -- host/auth.sh@68 -- # digest=sha512 00:20:36.296 20:12:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.296 20:12:18 -- host/auth.sh@68 -- # keyid=0 00:20:36.296 20:12:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
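Each pass of the loop above exercises the same DH-HMAC-CHAP cycle: the target key is set via nvmet_auth_set_key, the host is restricted to one digest/DH-group combination, the controller is attached with one of the five DHHC-1 keys, success is verified by the controller name, and the controller is detached before the next combination. A minimal sketch of that per-iteration cycle, assuming rpc_cmd is the usual SPDK test wrapper around scripts/rpc.py and reusing the values seen in this run, looks like:

  # sketch only: restrict the host to a single digest / DH group pair
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # attach to the nvmet target on the initiator IP, authenticating with key N
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
  # authentication succeeded if the controller is visible by name
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  # tear down before the next digest/dhgroup/keyid combination
  rpc_cmd bdev_nvme_detach_controller nvme0
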
00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.296 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.296 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.296 20:12:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.296 20:12:18 -- nvmf/common.sh@717 -- # local ip 00:20:36.296 20:12:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.296 20:12:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.296 20:12:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.296 20:12:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.296 20:12:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.296 20:12:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.296 20:12:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.296 20:12:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.296 20:12:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.296 20:12:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:36.296 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.556 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.556 nvme0n1 00:20:36.556 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.556 20:12:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.556 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.556 20:12:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.556 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.556 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.556 20:12:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.556 20:12:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.556 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.556 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.556 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.556 20:12:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.556 20:12:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:36.556 20:12:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.556 20:12:18 -- host/auth.sh@44 -- # digest=sha512 00:20:36.556 20:12:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.556 20:12:18 -- host/auth.sh@44 -- # keyid=1 00:20:36.556 20:12:18 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:36.556 20:12:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.556 20:12:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.556 20:12:18 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:36.556 20:12:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:20:36.556 20:12:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.556 20:12:18 -- host/auth.sh@68 -- # digest=sha512 00:20:36.556 20:12:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.556 20:12:18 -- host/auth.sh@68 -- # keyid=1 00:20:36.556 20:12:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.556 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.556 20:12:18 -- 
common/autotest_common.sh@10 -- # set +x 00:20:36.556 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.556 20:12:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:36.556 20:12:18 -- nvmf/common.sh@717 -- # local ip 00:20:36.556 20:12:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.556 20:12:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.556 20:12:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.556 20:12:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.556 20:12:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.556 20:12:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.556 20:12:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.556 20:12:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.556 20:12:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.556 20:12:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:36.556 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.556 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.815 nvme0n1 00:20:36.815 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.815 20:12:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.815 20:12:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:36.815 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.815 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.815 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.815 20:12:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.815 20:12:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.815 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.815 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.815 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.815 20:12:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:36.815 20:12:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:36.815 20:12:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:36.815 20:12:18 -- host/auth.sh@44 -- # digest=sha512 00:20:36.815 20:12:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:36.815 20:12:18 -- host/auth.sh@44 -- # keyid=2 00:20:36.815 20:12:18 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:36.815 20:12:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:36.815 20:12:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:36.815 20:12:18 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:36.815 20:12:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:20:36.815 20:12:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:36.815 20:12:18 -- host/auth.sh@68 -- # digest=sha512 00:20:36.815 20:12:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:36.815 20:12:18 -- host/auth.sh@68 -- # keyid=2 00:20:36.815 20:12:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:36.815 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.815 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.815 20:12:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.815 20:12:18 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:20:36.815 20:12:18 -- nvmf/common.sh@717 -- # local ip 00:20:36.815 20:12:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:36.815 20:12:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:36.815 20:12:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.815 20:12:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.815 20:12:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:36.815 20:12:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.816 20:12:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:36.816 20:12:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:36.816 20:12:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:36.816 20:12:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:36.816 20:12:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.816 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.075 nvme0n1 00:20:37.075 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.075 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.075 20:12:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.075 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.075 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.075 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.075 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.075 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.075 20:12:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:37.075 20:12:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.075 20:12:19 -- host/auth.sh@44 -- # digest=sha512 00:20:37.075 20:12:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.075 20:12:19 -- host/auth.sh@44 -- # keyid=3 00:20:37.075 20:12:19 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:37.075 20:12:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.075 20:12:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:37.075 20:12:19 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:37.075 20:12:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:20:37.075 20:12:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.075 20:12:19 -- host/auth.sh@68 -- # digest=sha512 00:20:37.075 20:12:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:37.075 20:12:19 -- host/auth.sh@68 -- # keyid=3 00:20:37.075 20:12:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.075 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.075 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.075 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.075 20:12:19 -- nvmf/common.sh@717 -- # local ip 00:20:37.075 20:12:19 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:20:37.075 20:12:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.075 20:12:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.075 20:12:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.075 20:12:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.075 20:12:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.075 20:12:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.075 20:12:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.075 20:12:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.075 20:12:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:37.075 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.075 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.075 nvme0n1 00:20:37.075 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.075 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.075 20:12:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.075 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.075 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.075 20:12:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.075 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.075 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.333 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.333 20:12:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.333 20:12:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:37.333 20:12:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.333 20:12:19 -- host/auth.sh@44 -- # digest=sha512 00:20:37.333 20:12:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:37.333 20:12:19 -- host/auth.sh@44 -- # keyid=4 00:20:37.333 20:12:19 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:37.333 20:12:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.333 20:12:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:20:37.333 20:12:19 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:37.333 20:12:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:20:37.333 20:12:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.333 20:12:19 -- host/auth.sh@68 -- # digest=sha512 00:20:37.333 20:12:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:20:37.333 20:12:19 -- host/auth.sh@68 -- # keyid=4 00:20:37.333 20:12:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:37.333 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.333 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.333 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.333 20:12:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.333 20:12:19 -- nvmf/common.sh@717 -- # local ip 00:20:37.333 20:12:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.333 20:12:19 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:20:37.333 20:12:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.333 20:12:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.333 20:12:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.333 20:12:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.333 20:12:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.333 20:12:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.333 20:12:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.333 20:12:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:37.333 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.333 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.333 nvme0n1 00:20:37.333 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.333 20:12:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.333 20:12:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.333 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.333 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.333 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.333 20:12:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.333 20:12:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.334 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.334 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.334 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.334 20:12:19 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.334 20:12:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.334 20:12:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:37.334 20:12:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.334 20:12:19 -- host/auth.sh@44 -- # digest=sha512 00:20:37.334 20:12:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.334 20:12:19 -- host/auth.sh@44 -- # keyid=0 00:20:37.334 20:12:19 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:37.334 20:12:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.334 20:12:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.334 20:12:19 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:37.334 20:12:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:20:37.334 20:12:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.334 20:12:19 -- host/auth.sh@68 -- # digest=sha512 00:20:37.334 20:12:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.334 20:12:19 -- host/auth.sh@68 -- # keyid=0 00:20:37.334 20:12:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.334 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.334 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.334 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.334 20:12:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.334 20:12:19 -- nvmf/common.sh@717 -- # local ip 00:20:37.334 20:12:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.334 20:12:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.334 20:12:19 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.334 20:12:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.334 20:12:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.334 20:12:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.334 20:12:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.334 20:12:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.334 20:12:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.334 20:12:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:37.334 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.334 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.592 nvme0n1 00:20:37.592 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.592 20:12:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.592 20:12:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.592 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.592 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.592 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.592 20:12:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.592 20:12:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.592 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.592 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.592 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.592 20:12:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.592 20:12:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:37.592 20:12:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.592 20:12:19 -- host/auth.sh@44 -- # digest=sha512 00:20:37.592 20:12:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.592 20:12:19 -- host/auth.sh@44 -- # keyid=1 00:20:37.592 20:12:19 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:37.592 20:12:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.592 20:12:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.592 20:12:19 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:37.592 20:12:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:20:37.592 20:12:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.592 20:12:19 -- host/auth.sh@68 -- # digest=sha512 00:20:37.592 20:12:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.592 20:12:19 -- host/auth.sh@68 -- # keyid=1 00:20:37.592 20:12:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.592 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.592 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.592 20:12:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.592 20:12:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.592 20:12:19 -- nvmf/common.sh@717 -- # local ip 00:20:37.592 20:12:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.592 20:12:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.592 20:12:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.592 20:12:19 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.592 20:12:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.592 20:12:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:37.592 20:12:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.592 20:12:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.592 20:12:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.592 20:12:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:37.592 20:12:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.592 20:12:19 -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 nvme0n1 00:20:37.851 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.851 20:12:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:37.851 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.851 20:12:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:37.851 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.851 20:12:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.851 20:12:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.851 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.851 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.851 20:12:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:37.851 20:12:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:37.851 20:12:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:37.851 20:12:20 -- host/auth.sh@44 -- # digest=sha512 00:20:37.851 20:12:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:37.851 20:12:20 -- host/auth.sh@44 -- # keyid=2 00:20:37.851 20:12:20 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:37.851 20:12:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:37.851 20:12:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:37.851 20:12:20 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:37.851 20:12:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:20:37.851 20:12:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:37.851 20:12:20 -- host/auth.sh@68 -- # digest=sha512 00:20:37.851 20:12:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:37.851 20:12:20 -- host/auth.sh@68 -- # keyid=2 00:20:37.851 20:12:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:37.851 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.851 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:37.851 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.851 20:12:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:37.851 20:12:20 -- nvmf/common.sh@717 -- # local ip 00:20:37.851 20:12:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:37.851 20:12:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:37.851 20:12:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:37.851 20:12:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:37.851 20:12:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:37.851 20:12:20 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:20:37.851 20:12:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:37.851 20:12:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:37.851 20:12:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:37.851 20:12:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:37.851 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.851 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.111 nvme0n1 00:20:38.111 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.111 20:12:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.111 20:12:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.111 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.111 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.111 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.111 20:12:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.111 20:12:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.111 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.111 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.111 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.111 20:12:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.111 20:12:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:38.111 20:12:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.111 20:12:20 -- host/auth.sh@44 -- # digest=sha512 00:20:38.111 20:12:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.111 20:12:20 -- host/auth.sh@44 -- # keyid=3 00:20:38.111 20:12:20 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:38.111 20:12:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:38.111 20:12:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:38.111 20:12:20 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:38.111 20:12:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:20:38.111 20:12:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.111 20:12:20 -- host/auth.sh@68 -- # digest=sha512 00:20:38.111 20:12:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:38.111 20:12:20 -- host/auth.sh@68 -- # keyid=3 00:20:38.111 20:12:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.111 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.111 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.111 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.372 20:12:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.372 20:12:20 -- nvmf/common.sh@717 -- # local ip 00:20:38.372 20:12:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.372 20:12:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.372 20:12:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.372 20:12:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.372 20:12:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:38.372 20:12:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.372 20:12:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:38.372 20:12:20 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:38.372 20:12:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:38.372 20:12:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:38.372 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.372 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.372 nvme0n1 00:20:38.372 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.372 20:12:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.372 20:12:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.372 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.372 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.372 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.372 20:12:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.372 20:12:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.372 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.372 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.372 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.372 20:12:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.372 20:12:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:38.372 20:12:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.372 20:12:20 -- host/auth.sh@44 -- # digest=sha512 00:20:38.372 20:12:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:38.372 20:12:20 -- host/auth.sh@44 -- # keyid=4 00:20:38.372 20:12:20 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:38.372 20:12:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:38.372 20:12:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:20:38.372 20:12:20 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:38.372 20:12:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:20:38.372 20:12:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.372 20:12:20 -- host/auth.sh@68 -- # digest=sha512 00:20:38.372 20:12:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:20:38.372 20:12:20 -- host/auth.sh@68 -- # keyid=4 00:20:38.372 20:12:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:38.372 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.372 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.632 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.632 20:12:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.632 20:12:20 -- nvmf/common.sh@717 -- # local ip 00:20:38.632 20:12:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.632 20:12:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.632 20:12:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.632 20:12:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.632 20:12:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:38.632 20:12:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.632 20:12:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:38.632 20:12:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:38.632 20:12:20 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:38.632 20:12:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.632 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.632 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.632 nvme0n1 00:20:38.632 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.632 20:12:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.632 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.632 20:12:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:38.632 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.632 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.632 20:12:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.632 20:12:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.632 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.632 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.632 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.632 20:12:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.632 20:12:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:38.632 20:12:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:38.632 20:12:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:38.632 20:12:20 -- host/auth.sh@44 -- # digest=sha512 00:20:38.632 20:12:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:38.632 20:12:20 -- host/auth.sh@44 -- # keyid=0 00:20:38.632 20:12:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:38.632 20:12:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:38.632 20:12:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:38.632 20:12:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:38.632 20:12:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:20:38.632 20:12:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:38.632 20:12:20 -- host/auth.sh@68 -- # digest=sha512 00:20:38.632 20:12:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:38.632 20:12:20 -- host/auth.sh@68 -- # keyid=0 00:20:38.632 20:12:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:38.632 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.632 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:38.890 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.890 20:12:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:38.890 20:12:20 -- nvmf/common.sh@717 -- # local ip 00:20:38.890 20:12:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:38.890 20:12:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:38.890 20:12:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.890 20:12:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.890 20:12:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:38.890 20:12:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.890 20:12:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:38.890 20:12:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:38.890 20:12:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:38.890 20:12:20 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:38.890 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.890 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:20:39.147 nvme0n1 00:20:39.147 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.147 20:12:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.147 20:12:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:39.147 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.147 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.147 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.147 20:12:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.147 20:12:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.147 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.147 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.147 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.147 20:12:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.147 20:12:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:39.147 20:12:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.147 20:12:21 -- host/auth.sh@44 -- # digest=sha512 00:20:39.147 20:12:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.147 20:12:21 -- host/auth.sh@44 -- # keyid=1 00:20:39.147 20:12:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:39.147 20:12:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:39.147 20:12:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:39.147 20:12:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:39.147 20:12:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:20:39.147 20:12:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.147 20:12:21 -- host/auth.sh@68 -- # digest=sha512 00:20:39.147 20:12:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:39.147 20:12:21 -- host/auth.sh@68 -- # keyid=1 00:20:39.147 20:12:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.147 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.147 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.147 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.147 20:12:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.147 20:12:21 -- nvmf/common.sh@717 -- # local ip 00:20:39.147 20:12:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.147 20:12:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.147 20:12:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.147 20:12:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.147 20:12:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:39.147 20:12:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.147 20:12:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:39.147 20:12:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:39.147 20:12:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:39.147 20:12:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:39.147 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.147 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.404 nvme0n1 00:20:39.404 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.404 20:12:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.404 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.404 20:12:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:39.404 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.404 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.404 20:12:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.404 20:12:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.404 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.404 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.664 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.664 20:12:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.664 20:12:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:39.664 20:12:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.664 20:12:21 -- host/auth.sh@44 -- # digest=sha512 00:20:39.664 20:12:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.664 20:12:21 -- host/auth.sh@44 -- # keyid=2 00:20:39.664 20:12:21 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:39.664 20:12:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:39.664 20:12:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:39.664 20:12:21 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:39.664 20:12:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:20:39.664 20:12:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.664 20:12:21 -- host/auth.sh@68 -- # digest=sha512 00:20:39.664 20:12:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:39.664 20:12:21 -- host/auth.sh@68 -- # keyid=2 00:20:39.664 20:12:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.664 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.664 20:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:39.664 20:12:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.664 20:12:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.664 20:12:21 -- nvmf/common.sh@717 -- # local ip 00:20:39.664 20:12:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.664 20:12:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.664 20:12:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.664 20:12:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.664 20:12:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:39.664 20:12:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.664 20:12:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:39.664 20:12:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:39.664 20:12:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:39.664 20:12:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:39.664 20:12:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.664 20:12:21 -- 
common/autotest_common.sh@10 -- # set +x 00:20:39.922 nvme0n1 00:20:39.922 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.922 20:12:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.922 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.922 20:12:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:39.922 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:39.922 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.922 20:12:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.922 20:12:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.922 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.922 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:39.922 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.922 20:12:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:39.922 20:12:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:39.922 20:12:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:39.922 20:12:22 -- host/auth.sh@44 -- # digest=sha512 00:20:39.922 20:12:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:39.922 20:12:22 -- host/auth.sh@44 -- # keyid=3 00:20:39.922 20:12:22 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:39.922 20:12:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:39.923 20:12:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:39.923 20:12:22 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:39.923 20:12:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:20:39.923 20:12:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:39.923 20:12:22 -- host/auth.sh@68 -- # digest=sha512 00:20:39.923 20:12:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:39.923 20:12:22 -- host/auth.sh@68 -- # keyid=3 00:20:39.923 20:12:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:39.923 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.923 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:39.923 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.923 20:12:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:39.923 20:12:22 -- nvmf/common.sh@717 -- # local ip 00:20:39.923 20:12:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:39.923 20:12:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:39.923 20:12:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.923 20:12:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.923 20:12:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:39.923 20:12:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.923 20:12:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:39.923 20:12:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:39.923 20:12:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:39.923 20:12:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:39.923 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.923 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.181 nvme0n1 00:20:40.181 20:12:22 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:20:40.181 20:12:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:40.181 20:12:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.181 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.181 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.181 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.439 20:12:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.439 20:12:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.439 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.439 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.439 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.439 20:12:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:40.439 20:12:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:40.439 20:12:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:40.439 20:12:22 -- host/auth.sh@44 -- # digest=sha512 00:20:40.439 20:12:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:40.439 20:12:22 -- host/auth.sh@44 -- # keyid=4 00:20:40.439 20:12:22 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:40.439 20:12:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:40.439 20:12:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:20:40.439 20:12:22 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:40.439 20:12:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:20:40.439 20:12:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:40.439 20:12:22 -- host/auth.sh@68 -- # digest=sha512 00:20:40.439 20:12:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:20:40.439 20:12:22 -- host/auth.sh@68 -- # keyid=4 00:20:40.439 20:12:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.439 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.439 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.439 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.439 20:12:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:40.439 20:12:22 -- nvmf/common.sh@717 -- # local ip 00:20:40.439 20:12:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:40.439 20:12:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:40.439 20:12:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.439 20:12:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.439 20:12:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:40.439 20:12:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.439 20:12:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:40.439 20:12:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:40.439 20:12:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:40.439 20:12:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:40.439 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.439 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.698 nvme0n1 00:20:40.698 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.698 20:12:22 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:40.698 20:12:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:40.698 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.698 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.698 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.698 20:12:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.698 20:12:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.698 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.698 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.698 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.698 20:12:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.698 20:12:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:40.698 20:12:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:40.698 20:12:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:40.698 20:12:22 -- host/auth.sh@44 -- # digest=sha512 00:20:40.698 20:12:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:40.698 20:12:22 -- host/auth.sh@44 -- # keyid=0 00:20:40.698 20:12:22 -- host/auth.sh@45 -- # key=DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:40.698 20:12:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:40.698 20:12:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:40.698 20:12:22 -- host/auth.sh@49 -- # echo DHHC-1:00:ZjgwN2RjNTZjYTlhNmI4ZmYzY2NmNjM5YTExODA4OTBRrrnb: 00:20:40.698 20:12:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:20:40.698 20:12:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:40.698 20:12:22 -- host/auth.sh@68 -- # digest=sha512 00:20:40.698 20:12:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:40.698 20:12:22 -- host/auth.sh@68 -- # keyid=0 00:20:40.698 20:12:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.698 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.698 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:40.698 20:12:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:40.698 20:12:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:40.698 20:12:22 -- nvmf/common.sh@717 -- # local ip 00:20:40.698 20:12:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:40.698 20:12:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:40.698 20:12:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.698 20:12:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.698 20:12:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:40.698 20:12:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.698 20:12:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:40.698 20:12:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:40.698 20:12:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:40.698 20:12:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:20:40.698 20:12:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:40.698 20:12:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.268 nvme0n1 00:20:41.268 20:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.268 20:12:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.268 20:12:23 -- host/auth.sh@73 -- # jq -r '.[].name' 
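The trace above repeats one host-side sequence per digest/dhgroup/key combination (sha512 with ffdhe4096, ffdhe6144 and ffdhe8192, key ids 0 through 4 are visible here). A minimal sketch of that per-iteration flow, reconstructed only from the rpc_cmd calls shown; the real helper is the connect_authenticate function in the suite's host/auth.sh and its exact body is an assumption:

    # Sketch of one connect_authenticate iteration as seen in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach to the kernel nvmet target at 10.0.0.1:4420 with the matching secret.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        # Authentication only succeeded if the controller actually shows up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }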
00:20:41.268 20:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.268 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:20:41.268 20:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.268 20:12:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.268 20:12:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.268 20:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.268 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:20:41.268 20:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.268 20:12:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:41.268 20:12:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:41.268 20:12:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:41.268 20:12:23 -- host/auth.sh@44 -- # digest=sha512 00:20:41.268 20:12:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:41.268 20:12:23 -- host/auth.sh@44 -- # keyid=1 00:20:41.268 20:12:23 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:41.268 20:12:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:41.268 20:12:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:41.268 20:12:23 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:41.268 20:12:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:20:41.268 20:12:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:41.268 20:12:23 -- host/auth.sh@68 -- # digest=sha512 00:20:41.268 20:12:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:41.268 20:12:23 -- host/auth.sh@68 -- # keyid=1 00:20:41.268 20:12:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.268 20:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.268 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:20:41.268 20:12:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.268 20:12:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:41.268 20:12:23 -- nvmf/common.sh@717 -- # local ip 00:20:41.268 20:12:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:41.268 20:12:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:41.268 20:12:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.268 20:12:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.268 20:12:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:41.268 20:12:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.268 20:12:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:41.268 20:12:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:41.268 20:12:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:41.268 20:12:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:20:41.268 20:12:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.268 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:20:41.835 nvme0n1 00:20:41.835 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.835 20:12:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.835 20:12:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:41.835 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.835 20:12:24 -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.835 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.835 20:12:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.835 20:12:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.835 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.835 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.095 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.095 20:12:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.095 20:12:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:42.095 20:12:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.095 20:12:24 -- host/auth.sh@44 -- # digest=sha512 00:20:42.095 20:12:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.095 20:12:24 -- host/auth.sh@44 -- # keyid=2 00:20:42.095 20:12:24 -- host/auth.sh@45 -- # key=DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:42.096 20:12:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:42.096 20:12:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.096 20:12:24 -- host/auth.sh@49 -- # echo DHHC-1:01:OTg0MGVjZDJhODc1NDJmMzRjYmNiNTZiM2U1NTJhOTRQoHGi: 00:20:42.096 20:12:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:20:42.096 20:12:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.096 20:12:24 -- host/auth.sh@68 -- # digest=sha512 00:20:42.096 20:12:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.096 20:12:24 -- host/auth.sh@68 -- # keyid=2 00:20:42.096 20:12:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.096 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.096 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.096 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.096 20:12:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.096 20:12:24 -- nvmf/common.sh@717 -- # local ip 00:20:42.096 20:12:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.096 20:12:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.096 20:12:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.096 20:12:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.096 20:12:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:42.096 20:12:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.096 20:12:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:42.096 20:12:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:42.096 20:12:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:42.096 20:12:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:42.096 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.096 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.666 nvme0n1 00:20:42.666 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.666 20:12:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.666 20:12:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:42.666 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.666 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.666 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.666 20:12:24 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:20:42.666 20:12:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.666 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.666 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.666 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.666 20:12:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:42.666 20:12:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:42.666 20:12:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:42.666 20:12:24 -- host/auth.sh@44 -- # digest=sha512 00:20:42.666 20:12:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:42.666 20:12:24 -- host/auth.sh@44 -- # keyid=3 00:20:42.666 20:12:24 -- host/auth.sh@45 -- # key=DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:42.666 20:12:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:42.666 20:12:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:42.666 20:12:24 -- host/auth.sh@49 -- # echo DHHC-1:02:NTNmODQ0Nzc3YTBlYzE1NTk0ZjdkNDRjOTgxMTMzZjYwOTg0MmY1OGQ2NmU4NDNkb6nLdg==: 00:20:42.666 20:12:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:20:42.666 20:12:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:42.666 20:12:24 -- host/auth.sh@68 -- # digest=sha512 00:20:42.666 20:12:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:42.666 20:12:24 -- host/auth.sh@68 -- # keyid=3 00:20:42.666 20:12:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:42.666 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.666 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:42.666 20:12:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.666 20:12:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:42.666 20:12:24 -- nvmf/common.sh@717 -- # local ip 00:20:42.666 20:12:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:42.666 20:12:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:42.666 20:12:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.666 20:12:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.666 20:12:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:42.666 20:12:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.666 20:12:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:42.666 20:12:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:42.666 20:12:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:42.666 20:12:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:20:42.666 20:12:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.666 20:12:24 -- common/autotest_common.sh@10 -- # set +x 00:20:43.233 nvme0n1 00:20:43.233 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.233 20:12:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.233 20:12:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.233 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.233 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.233 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.233 20:12:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.233 20:12:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.233 
20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.233 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.233 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.233 20:12:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:20:43.233 20:12:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:43.233 20:12:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.233 20:12:25 -- host/auth.sh@44 -- # digest=sha512 00:20:43.233 20:12:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:43.233 20:12:25 -- host/auth.sh@44 -- # keyid=4 00:20:43.233 20:12:25 -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:43.233 20:12:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:20:43.233 20:12:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:20:43.233 20:12:25 -- host/auth.sh@49 -- # echo DHHC-1:03:NzM2MGI1N2QxOWNhYmYxMTk2OTUzMTBlYWFmMjU3OWViZGZlNjY4MDNiOWRkOTNkODk2ZTI5MWU2OTJiOTllNNfKkCI=: 00:20:43.233 20:12:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:20:43.233 20:12:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:20:43.233 20:12:25 -- host/auth.sh@68 -- # digest=sha512 00:20:43.233 20:12:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:20:43.233 20:12:25 -- host/auth.sh@68 -- # keyid=4 00:20:43.233 20:12:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.233 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.233 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.233 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.233 20:12:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:20:43.233 20:12:25 -- nvmf/common.sh@717 -- # local ip 00:20:43.233 20:12:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.233 20:12:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.233 20:12:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.233 20:12:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.233 20:12:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.233 20:12:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.233 20:12:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.233 20:12:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.233 20:12:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.233 20:12:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.233 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.233 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.802 nvme0n1 00:20:43.802 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.802 20:12:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.802 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.802 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.802 20:12:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:20:43.802 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.802 20:12:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.802 20:12:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.802 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.802 
20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.802 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.802 20:12:25 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:43.802 20:12:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:20:43.802 20:12:25 -- host/auth.sh@44 -- # digest=sha256 00:20:43.802 20:12:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:43.802 20:12:25 -- host/auth.sh@44 -- # keyid=1 00:20:43.802 20:12:25 -- host/auth.sh@45 -- # key=DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:43.802 20:12:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:20:43.802 20:12:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:20:43.802 20:12:25 -- host/auth.sh@49 -- # echo DHHC-1:00:MmY5OGU1MWU4MjE5NWYxZTZkZjljNmMxYjQ0NzNhYjc0MmU1YTM2ODkxZWJhMGQzSJbCLA==: 00:20:43.802 20:12:25 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:43.802 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.802 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.802 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.802 20:12:25 -- host/auth.sh@119 -- # get_main_ns_ip 00:20:43.802 20:12:25 -- nvmf/common.sh@717 -- # local ip 00:20:43.802 20:12:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.802 20:12:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.802 20:12:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.802 20:12:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.802 20:12:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.802 20:12:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.802 20:12:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.802 20:12:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.802 20:12:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.802 20:12:25 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:43.802 20:12:25 -- common/autotest_common.sh@638 -- # local es=0 00:20:43.802 20:12:25 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:43.802 20:12:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:43.802 20:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.802 20:12:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:43.802 20:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.802 20:12:25 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:43.802 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.803 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.803 request: 00:20:43.803 { 00:20:43.803 "name": "nvme0", 00:20:43.803 "trtype": "tcp", 00:20:43.803 "traddr": "10.0.0.1", 00:20:43.803 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:43.803 "adrfam": "ipv4", 00:20:43.803 "trsvcid": "4420", 00:20:43.803 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:43.803 "method": "bdev_nvme_attach_controller", 00:20:43.803 "req_id": 1 00:20:43.803 } 00:20:43.803 Got JSON-RPC error 
response 00:20:43.803 response: 00:20:43.803 { 00:20:43.803 "code": -32602, 00:20:43.803 "message": "Invalid parameters" 00:20:43.803 } 00:20:43.803 20:12:25 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:43.803 20:12:25 -- common/autotest_common.sh@641 -- # es=1 00:20:43.803 20:12:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:43.803 20:12:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:43.803 20:12:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:43.803 20:12:25 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.803 20:12:25 -- host/auth.sh@121 -- # jq length 00:20:43.803 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.803 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:20:43.803 20:12:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.803 20:12:25 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:20:43.803 20:12:25 -- host/auth.sh@124 -- # get_main_ns_ip 00:20:43.803 20:12:25 -- nvmf/common.sh@717 -- # local ip 00:20:43.803 20:12:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:43.803 20:12:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:43.803 20:12:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.803 20:12:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.803 20:12:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:43.803 20:12:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.803 20:12:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:43.803 20:12:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:43.803 20:12:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:43.803 20:12:25 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.803 20:12:25 -- common/autotest_common.sh@638 -- # local es=0 00:20:43.803 20:12:25 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.803 20:12:25 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:43.803 20:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.803 20:12:25 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:43.803 20:12:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:43.803 20:12:25 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:43.803 20:12:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.803 20:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:43.803 request: 00:20:43.803 { 00:20:43.803 "name": "nvme0", 00:20:43.803 "trtype": "tcp", 00:20:43.803 "traddr": "10.0.0.1", 00:20:43.803 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:43.803 "adrfam": "ipv4", 00:20:43.803 "trsvcid": "4420", 00:20:43.803 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:43.803 "dhchap_key": "key2", 00:20:43.803 "method": "bdev_nvme_attach_controller", 00:20:43.803 "req_id": 1 00:20:43.803 } 00:20:43.803 Got JSON-RPC error response 00:20:43.803 response: 00:20:43.803 { 00:20:43.803 "code": -32602, 00:20:43.803 "message": "Invalid parameters" 00:20:43.803 } 00:20:43.803 20:12:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
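Both failing attach attempts above (first with no DH-HMAC-CHAP key, then with key2 while the target expects a different secret for this host) are meant to be rejected, which the wrapper surfaces as the JSON-RPC -32602 responses shown. A hedged hand-run equivalent of that negative check, assuming the stock scripts/rpc.py client rather than the suite's rpc_cmd/NOT wrappers:

    # Expect failure: the target must refuse an attach whose DH-HMAC-CHAP key does not match.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
        echo "attach with a mismatched key unexpectedly succeeded" >&2
        exit 1
    fi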
00:20:43.803 20:12:26 -- common/autotest_common.sh@641 -- # es=1 00:20:43.803 20:12:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:43.803 20:12:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:43.803 20:12:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:43.803 20:12:26 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.803 20:12:26 -- host/auth.sh@127 -- # jq length 00:20:43.803 20:12:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.803 20:12:26 -- common/autotest_common.sh@10 -- # set +x 00:20:43.803 20:12:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:44.062 20:12:26 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:44.062 20:12:26 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:44.062 20:12:26 -- host/auth.sh@130 -- # cleanup 00:20:44.062 20:12:26 -- host/auth.sh@24 -- # nvmftestfini 00:20:44.062 20:12:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:44.062 20:12:26 -- nvmf/common.sh@117 -- # sync 00:20:44.062 20:12:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:44.062 20:12:26 -- nvmf/common.sh@120 -- # set +e 00:20:44.062 20:12:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:44.062 20:12:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:44.062 rmmod nvme_tcp 00:20:44.062 rmmod nvme_fabrics 00:20:44.062 20:12:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:44.062 20:12:26 -- nvmf/common.sh@124 -- # set -e 00:20:44.062 20:12:26 -- nvmf/common.sh@125 -- # return 0 00:20:44.062 20:12:26 -- nvmf/common.sh@478 -- # '[' -n 74603 ']' 00:20:44.062 20:12:26 -- nvmf/common.sh@479 -- # killprocess 74603 00:20:44.062 20:12:26 -- common/autotest_common.sh@936 -- # '[' -z 74603 ']' 00:20:44.062 20:12:26 -- common/autotest_common.sh@940 -- # kill -0 74603 00:20:44.062 20:12:26 -- common/autotest_common.sh@941 -- # uname 00:20:44.062 20:12:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:44.062 20:12:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74603 00:20:44.062 killing process with pid 74603 00:20:44.062 20:12:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:44.062 20:12:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:44.062 20:12:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74603' 00:20:44.062 20:12:26 -- common/autotest_common.sh@955 -- # kill 74603 00:20:44.062 20:12:26 -- common/autotest_common.sh@960 -- # wait 74603 00:20:44.321 20:12:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:44.321 20:12:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:44.321 20:12:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:44.321 20:12:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:44.321 20:12:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:44.321 20:12:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.321 20:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.321 20:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.321 20:12:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:44.321 20:12:26 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:44.321 20:12:26 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:44.321 20:12:26 -- host/auth.sh@27 -- # clean_kernel_target 00:20:44.321 20:12:26 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:44.321 20:12:26 -- nvmf/common.sh@675 -- # echo 0 00:20:44.321 20:12:26 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:44.321 20:12:26 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:44.321 20:12:26 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:44.321 20:12:26 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:44.321 20:12:26 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:20:44.321 20:12:26 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:20:44.321 20:12:26 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:45.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.256 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:45.256 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:45.256 20:12:27 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0Ie /tmp/spdk.key-null.nBK /tmp/spdk.key-sha256.5WU /tmp/spdk.key-sha384.b0h /tmp/spdk.key-sha512.xDO /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:45.256 20:12:27 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:45.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.854 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:45.854 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:45.854 00:20:45.854 real 0m35.850s 00:20:45.854 user 0m32.854s 00:20:45.854 sys 0m4.356s 00:20:45.854 20:12:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:45.854 20:12:28 -- common/autotest_common.sh@10 -- # set +x 00:20:45.854 ************************************ 00:20:45.854 END TEST nvmf_auth 00:20:45.854 ************************************ 00:20:45.854 20:12:28 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:20:45.854 20:12:28 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:45.854 20:12:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:45.854 20:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:45.854 20:12:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.113 ************************************ 00:20:46.113 START TEST nvmf_digest 00:20:46.113 ************************************ 00:20:46.113 20:12:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:46.113 * Looking for test storage... 
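With the auth suite finished and its kernel nvmet configfs entries torn down, the wrapper moves on to run_test nvmf_digest. To reproduce just this sub-suite by hand, something like the following should work, assuming the same autotest environment and the repository path printed above:

    cd /home/vagrant/spdk_repo/spdk
    sudo test/nvmf/host/digest.sh --transport=tcp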
00:20:46.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.113 20:12:28 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.113 20:12:28 -- nvmf/common.sh@7 -- # uname -s 00:20:46.113 20:12:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.113 20:12:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.113 20:12:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.113 20:12:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.113 20:12:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.113 20:12:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.113 20:12:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.113 20:12:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.113 20:12:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.113 20:12:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.113 20:12:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:20:46.113 20:12:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:20:46.113 20:12:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.113 20:12:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.113 20:12:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.113 20:12:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.113 20:12:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.113 20:12:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.113 20:12:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.113 20:12:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.113 20:12:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.113 20:12:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.113 20:12:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.113 20:12:28 -- paths/export.sh@5 -- # export PATH 00:20:46.113 20:12:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.113 20:12:28 -- nvmf/common.sh@47 -- # : 0 00:20:46.113 20:12:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.113 20:12:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.113 20:12:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.113 20:12:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.113 20:12:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.113 20:12:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.113 20:12:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.113 20:12:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.113 20:12:28 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:46.113 20:12:28 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:46.113 20:12:28 -- host/digest.sh@16 -- # runtime=2 00:20:46.113 20:12:28 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:46.113 20:12:28 -- host/digest.sh@138 -- # nvmftestinit 00:20:46.113 20:12:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:46.113 20:12:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.113 20:12:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:46.113 20:12:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:46.113 20:12:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:46.113 20:12:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.113 20:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.113 20:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.113 20:12:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:46.113 20:12:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:46.113 20:12:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:46.113 20:12:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:46.113 20:12:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:46.113 20:12:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:46.114 20:12:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.114 20:12:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.114 20:12:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.114 20:12:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:46.114 20:12:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:20:46.114 20:12:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.114 20:12:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.114 20:12:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.114 20:12:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.114 20:12:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.114 20:12:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.114 20:12:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.114 20:12:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.114 20:12:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:46.372 Cannot find device "nvmf_tgt_br" 00:20:46.372 20:12:28 -- nvmf/common.sh@155 -- # true 00:20:46.372 20:12:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.372 Cannot find device "nvmf_tgt_br2" 00:20:46.372 20:12:28 -- nvmf/common.sh@156 -- # true 00:20:46.372 20:12:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:46.372 20:12:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.372 Cannot find device "nvmf_tgt_br" 00:20:46.372 20:12:28 -- nvmf/common.sh@158 -- # true 00:20:46.372 20:12:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:46.372 Cannot find device "nvmf_tgt_br2" 00:20:46.372 20:12:28 -- nvmf/common.sh@159 -- # true 00:20:46.372 20:12:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.372 20:12:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.372 20:12:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.372 20:12:28 -- nvmf/common.sh@162 -- # true 00:20:46.372 20:12:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.372 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.372 20:12:28 -- nvmf/common.sh@163 -- # true 00:20:46.372 20:12:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.372 20:12:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.372 20:12:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.372 20:12:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.373 20:12:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.373 20:12:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.373 20:12:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.373 20:12:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.373 20:12:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.373 20:12:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.373 20:12:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.373 20:12:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.373 20:12:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.373 20:12:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.373 20:12:28 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.373 20:12:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.373 20:12:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.373 20:12:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.631 20:12:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.631 20:12:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.631 20:12:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:46.631 20:12:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.631 20:12:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.631 20:12:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:20:46.631 00:20:46.631 --- 10.0.0.2 ping statistics --- 00:20:46.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.631 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:46.631 20:12:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:20:46.631 00:20:46.631 --- 10.0.0.3 ping statistics --- 00:20:46.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.631 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:46.631 20:12:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:46.631 00:20:46.631 --- 10.0.0.1 ping statistics --- 00:20:46.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.631 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:46.631 20:12:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.631 20:12:28 -- nvmf/common.sh@422 -- # return 0 00:20:46.631 20:12:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:46.631 20:12:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.631 20:12:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:46.631 20:12:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:46.631 20:12:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.631 20:12:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:46.631 20:12:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:46.631 20:12:28 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:46.631 20:12:28 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:46.631 20:12:28 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:46.631 20:12:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:46.631 20:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:46.631 20:12:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.631 ************************************ 00:20:46.631 START TEST nvmf_digest_clean 00:20:46.631 ************************************ 00:20:46.631 20:12:28 -- common/autotest_common.sh@1111 -- # run_digest 00:20:46.631 20:12:28 -- host/digest.sh@120 -- # local dsa_initiator 00:20:46.631 20:12:28 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:46.631 20:12:28 -- host/digest.sh@121 -- # dsa_initiator=false 
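The nvmf_veth_init trace above reduces to a small, fixed topology: a network namespace nvmf_tgt_ns_spdk holding the target-side interfaces, three veth pairs (initiator 10.0.0.1, first target 10.0.0.2, second target 10.0.0.3), a bridge nvmf_br joining the host-side peers, iptables rules that accept NVMe/TCP traffic on port 4420 and allow forwarding across the bridge, and a ping check in each direction. A condensed sketch of the same commands, taken from the trace (run as root; stale-device cleanup and error handling omitted):

  # namespace for the target side, plus the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace and address everything
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring the links up on both sides of each pair
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP on port 4420, allow bridge forwarding, and verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the cleanup of a previous run's devices that do not exist yet; they are expected and not failures.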
00:20:46.631 20:12:28 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:46.631 20:12:28 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:46.631 20:12:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:46.631 20:12:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:46.631 20:12:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.631 20:12:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:46.631 20:12:28 -- nvmf/common.sh@470 -- # nvmfpid=76191 00:20:46.631 20:12:28 -- nvmf/common.sh@471 -- # waitforlisten 76191 00:20:46.631 20:12:28 -- common/autotest_common.sh@817 -- # '[' -z 76191 ']' 00:20:46.631 20:12:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.631 20:12:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:46.631 20:12:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.631 20:12:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:46.631 20:12:28 -- common/autotest_common.sh@10 -- # set +x 00:20:46.631 [2024-04-24 20:12:28.837216] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:46.632 [2024-04-24 20:12:28.837360] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.889 [2024-04-24 20:12:28.960659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.889 [2024-04-24 20:12:29.060385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.889 [2024-04-24 20:12:29.060521] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.889 [2024-04-24 20:12:29.060600] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.889 [2024-04-24 20:12:29.060630] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.889 [2024-04-24 20:12:29.060648] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
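nvmfappstart in the trace above launches the target inside the namespace and then blocks in waitforlisten until the application's RPC socket is usable. The --wait-for-rpc flag keeps the app in a pre-init state so later RPCs can configure it before subsystems start; -e 0xFFFF is the tracepoint group mask reported in the NOTICE lines, and -i 0 is the shared-memory id that the process_shm trap references. A rough stand-alone equivalent, assuming the same repo path; the polling loop is only a crude stand-in for the real waitforlisten helper, which retries an RPC against the socket:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # wait for the UNIX-domain RPC socket to appear before sending any configuration
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done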
00:20:46.889 [2024-04-24 20:12:29.060693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.825 20:12:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:47.825 20:12:29 -- common/autotest_common.sh@850 -- # return 0 00:20:47.825 20:12:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:47.825 20:12:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:47.825 20:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.825 20:12:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.825 20:12:29 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:47.825 20:12:29 -- host/digest.sh@126 -- # common_target_config 00:20:47.825 20:12:29 -- host/digest.sh@43 -- # rpc_cmd 00:20:47.825 20:12:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:47.825 20:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.825 null0 00:20:47.825 [2024-04-24 20:12:29.926359] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.825 [2024-04-24 20:12:29.950247] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:47.825 [2024-04-24 20:12:29.950471] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.825 20:12:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:47.825 20:12:29 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:47.825 20:12:29 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:47.825 20:12:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:47.825 20:12:29 -- host/digest.sh@80 -- # rw=randread 00:20:47.825 20:12:29 -- host/digest.sh@80 -- # bs=4096 00:20:47.825 20:12:29 -- host/digest.sh@80 -- # qd=128 00:20:47.825 20:12:29 -- host/digest.sh@80 -- # scan_dsa=false 00:20:47.825 20:12:29 -- host/digest.sh@83 -- # bperfpid=76223 00:20:47.825 20:12:29 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:47.825 20:12:29 -- host/digest.sh@84 -- # waitforlisten 76223 /var/tmp/bperf.sock 00:20:47.825 20:12:29 -- common/autotest_common.sh@817 -- # '[' -z 76223 ']' 00:20:47.825 20:12:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:47.825 20:12:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.825 20:12:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:47.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:47.825 20:12:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.825 20:12:29 -- common/autotest_common.sh@10 -- # set +x 00:20:47.825 [2024-04-24 20:12:30.009302] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:20:47.825 [2024-04-24 20:12:30.009466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76223 ] 00:20:48.083 [2024-04-24 20:12:30.146579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.083 [2024-04-24 20:12:30.245149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.647 20:12:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.647 20:12:30 -- common/autotest_common.sh@850 -- # return 0 00:20:48.647 20:12:30 -- host/digest.sh@86 -- # false 00:20:48.647 20:12:30 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:48.647 20:12:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:48.906 20:12:31 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:48.906 20:12:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:49.164 nvme0n1 00:20:49.164 20:12:31 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:49.164 20:12:31 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:49.422 Running I/O for 2 seconds... 00:20:51.325 00:20:51.325 Latency(us) 00:20:51.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.325 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:51.325 nvme0n1 : 2.01 16557.21 64.68 0.00 0.00 7725.19 6839.78 20147.31 00:20:51.325 =================================================================================================================== 00:20:51.325 Total : 16557.21 64.68 0.00 0.00 7725.19 6839.78 20147.31 00:20:51.325 0 00:20:51.325 20:12:33 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:51.325 20:12:33 -- host/digest.sh@93 -- # get_accel_stats 00:20:51.325 20:12:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:51.325 20:12:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:51.325 20:12:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:51.325 | select(.opcode=="crc32c") 00:20:51.325 | "\(.module_name) \(.executed)"' 00:20:51.585 20:12:33 -- host/digest.sh@94 -- # false 00:20:51.585 20:12:33 -- host/digest.sh@94 -- # exp_module=software 00:20:51.585 20:12:33 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:51.585 20:12:33 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:51.585 20:12:33 -- host/digest.sh@98 -- # killprocess 76223 00:20:51.585 20:12:33 -- common/autotest_common.sh@936 -- # '[' -z 76223 ']' 00:20:51.585 20:12:33 -- common/autotest_common.sh@940 -- # kill -0 76223 00:20:51.585 20:12:33 -- common/autotest_common.sh@941 -- # uname 00:20:51.585 20:12:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:51.585 20:12:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76223 00:20:51.585 killing process with pid 76223 00:20:51.585 Received shutdown signal, test time was about 2.000000 seconds 00:20:51.585 00:20:51.585 Latency(us) 00:20:51.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:51.585 =================================================================================================================== 00:20:51.585 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.585 20:12:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:51.585 20:12:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:51.585 20:12:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76223' 00:20:51.585 20:12:33 -- common/autotest_common.sh@955 -- # kill 76223 00:20:51.585 20:12:33 -- common/autotest_common.sh@960 -- # wait 76223 00:20:51.845 20:12:33 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:51.845 20:12:33 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:51.845 20:12:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:51.845 20:12:33 -- host/digest.sh@80 -- # rw=randread 00:20:51.845 20:12:33 -- host/digest.sh@80 -- # bs=131072 00:20:51.845 20:12:33 -- host/digest.sh@80 -- # qd=16 00:20:51.845 20:12:33 -- host/digest.sh@80 -- # scan_dsa=false 00:20:51.845 20:12:33 -- host/digest.sh@83 -- # bperfpid=76284 00:20:51.845 20:12:34 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:51.845 20:12:34 -- host/digest.sh@84 -- # waitforlisten 76284 /var/tmp/bperf.sock 00:20:51.845 20:12:34 -- common/autotest_common.sh@817 -- # '[' -z 76284 ']' 00:20:51.845 20:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:51.845 20:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:51.845 20:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:51.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:51.845 20:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:51.845 20:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:51.845 [2024-04-24 20:12:34.049506] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:20:51.845 [2024-04-24 20:12:34.049643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76284 ] 00:20:51.845 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:51.845 Zero copy mechanism will not be used. 
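Each run_bperf invocation in this test follows the same RPC-driven recipe, with only the workload (-w), I/O size (-o), and queue depth (-q) changing between the four runs: launch bdevperf against its own RPC socket in wait mode, complete its deferred framework init, attach an NVMe/TCP controller to the target with the data digest enabled, then drive the timed workload from bdevperf.py. A condensed sketch of the first case (4 KiB randread, queue depth 128), with paths, socket, and NQN taken from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock
  # -z keeps bdevperf waiting for RPC-driven tests; --wait-for-rpc defers framework init
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the trace waits for $SOCK via waitforlisten before issuing RPCs; that step is elided here)
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init
  # --ddgst turns on the NVMe/TCP data digest (crc32c) for this controller
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # run the 2-second workload that produced the IOPS/latency table above
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests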
00:20:52.104 [2024-04-24 20:12:34.189691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.104 [2024-04-24 20:12:34.294241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.673 20:12:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:52.673 20:12:34 -- common/autotest_common.sh@850 -- # return 0 00:20:52.673 20:12:34 -- host/digest.sh@86 -- # false 00:20:52.673 20:12:34 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:52.673 20:12:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:52.933 20:12:35 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:52.933 20:12:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:53.499 nvme0n1 00:20:53.499 20:12:35 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:53.499 20:12:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:53.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:53.499 Zero copy mechanism will not be used. 00:20:53.499 Running I/O for 2 seconds... 00:20:55.396 00:20:55.396 Latency(us) 00:20:55.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:55.396 nvme0n1 : 2.00 8152.65 1019.08 0.00 0.00 1959.93 1767.18 4664.79 00:20:55.396 =================================================================================================================== 00:20:55.396 Total : 8152.65 1019.08 0.00 0.00 1959.93 1767.18 4664.79 00:20:55.396 0 00:20:55.396 20:12:37 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:55.396 20:12:37 -- host/digest.sh@93 -- # get_accel_stats 00:20:55.396 20:12:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:55.396 20:12:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:55.396 | select(.opcode=="crc32c") 00:20:55.396 | "\(.module_name) \(.executed)"' 00:20:55.396 20:12:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:55.653 20:12:37 -- host/digest.sh@94 -- # false 00:20:55.653 20:12:37 -- host/digest.sh@94 -- # exp_module=software 00:20:55.653 20:12:37 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:55.653 20:12:37 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:55.653 20:12:37 -- host/digest.sh@98 -- # killprocess 76284 00:20:55.653 20:12:37 -- common/autotest_common.sh@936 -- # '[' -z 76284 ']' 00:20:55.653 20:12:37 -- common/autotest_common.sh@940 -- # kill -0 76284 00:20:55.653 20:12:37 -- common/autotest_common.sh@941 -- # uname 00:20:55.653 20:12:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.653 20:12:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76284 00:20:55.653 20:12:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:55.653 killing process with pid 76284 00:20:55.653 Received shutdown signal, test time was about 2.000000 seconds 00:20:55.653 00:20:55.653 Latency(us) 00:20:55.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.653 
=================================================================================================================== 00:20:55.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.653 20:12:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:55.653 20:12:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76284' 00:20:55.653 20:12:37 -- common/autotest_common.sh@955 -- # kill 76284 00:20:55.653 20:12:37 -- common/autotest_common.sh@960 -- # wait 76284 00:20:55.910 20:12:38 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:55.910 20:12:38 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:55.910 20:12:38 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:55.910 20:12:38 -- host/digest.sh@80 -- # rw=randwrite 00:20:55.910 20:12:38 -- host/digest.sh@80 -- # bs=4096 00:20:55.910 20:12:38 -- host/digest.sh@80 -- # qd=128 00:20:55.910 20:12:38 -- host/digest.sh@80 -- # scan_dsa=false 00:20:55.910 20:12:38 -- host/digest.sh@83 -- # bperfpid=76343 00:20:55.910 20:12:38 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:55.910 20:12:38 -- host/digest.sh@84 -- # waitforlisten 76343 /var/tmp/bperf.sock 00:20:55.910 20:12:38 -- common/autotest_common.sh@817 -- # '[' -z 76343 ']' 00:20:55.910 20:12:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:55.910 20:12:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:55.910 20:12:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:55.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:55.910 20:12:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:55.910 20:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:55.910 [2024-04-24 20:12:38.161996] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
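After each timed run the script checks where the crc32c work actually executed before tearing the bperf process down: get_accel_stats queries accel framework statistics over the bperf socket, the jq filter picks out the crc32c opcode, and the test asserts that the executing module is software (the expected module whenever scan_dsa=false, as in all four runs here) and that its executed count is non-zero; killprocess then verifies the pid, signals it, and reaps it. A sketch of that verification and teardown, reusing the jq filter from the trace; bperfpid stands for whatever pid the launch step recorded:

  SPDK=/home/vagrant/spdk_repo/spdk
  read -r acc_module acc_executed < <(
      $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # with DSA disabled the digest must have been computed by the software module, at least once
  [[ $acc_module == software && $acc_executed -gt 0 ]] || exit 1
  # killprocess: confirm the process exists and is the expected binary, then kill and reap it
  kill -0 "$bperfpid"
  ps --no-headers -o comm= "$bperfpid"
  kill "$bperfpid"
  wait "$bperfpid"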
00:20:55.910 [2024-04-24 20:12:38.162138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76343 ] 00:20:56.168 [2024-04-24 20:12:38.302301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.168 [2024-04-24 20:12:38.397362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.102 20:12:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:57.102 20:12:38 -- common/autotest_common.sh@850 -- # return 0 00:20:57.102 20:12:38 -- host/digest.sh@86 -- # false 00:20:57.102 20:12:38 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:57.102 20:12:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:57.102 20:12:39 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.102 20:12:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:57.361 nvme0n1 00:20:57.361 20:12:39 -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:57.361 20:12:39 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:57.620 Running I/O for 2 seconds... 00:20:59.522 00:20:59.522 Latency(us) 00:20:59.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.522 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:59.522 nvme0n1 : 2.01 18745.70 73.23 0.00 0.00 6822.62 3863.48 14022.99 00:20:59.522 =================================================================================================================== 00:20:59.522 Total : 18745.70 73.23 0.00 0.00 6822.62 3863.48 14022.99 00:20:59.522 0 00:20:59.522 20:12:41 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:59.522 20:12:41 -- host/digest.sh@93 -- # get_accel_stats 00:20:59.522 20:12:41 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:59.522 | select(.opcode=="crc32c") 00:20:59.522 | "\(.module_name) \(.executed)"' 00:20:59.522 20:12:41 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:59.522 20:12:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:59.787 20:12:41 -- host/digest.sh@94 -- # false 00:20:59.787 20:12:41 -- host/digest.sh@94 -- # exp_module=software 00:20:59.787 20:12:41 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:59.787 20:12:41 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:59.787 20:12:41 -- host/digest.sh@98 -- # killprocess 76343 00:20:59.787 20:12:41 -- common/autotest_common.sh@936 -- # '[' -z 76343 ']' 00:20:59.787 20:12:41 -- common/autotest_common.sh@940 -- # kill -0 76343 00:20:59.787 20:12:41 -- common/autotest_common.sh@941 -- # uname 00:20:59.787 20:12:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:59.787 20:12:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76343 00:20:59.787 killing process with pid 76343 00:20:59.787 Received shutdown signal, test time was about 2.000000 seconds 00:20:59.787 00:20:59.787 Latency(us) 00:20:59.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:59.787 =================================================================================================================== 00:20:59.787 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.787 20:12:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:59.787 20:12:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:59.787 20:12:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76343' 00:20:59.787 20:12:41 -- common/autotest_common.sh@955 -- # kill 76343 00:20:59.787 20:12:41 -- common/autotest_common.sh@960 -- # wait 76343 00:21:00.057 20:12:42 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:00.057 20:12:42 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:00.057 20:12:42 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:00.057 20:12:42 -- host/digest.sh@80 -- # rw=randwrite 00:21:00.057 20:12:42 -- host/digest.sh@80 -- # bs=131072 00:21:00.057 20:12:42 -- host/digest.sh@80 -- # qd=16 00:21:00.057 20:12:42 -- host/digest.sh@80 -- # scan_dsa=false 00:21:00.057 20:12:42 -- host/digest.sh@83 -- # bperfpid=76399 00:21:00.057 20:12:42 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:00.057 20:12:42 -- host/digest.sh@84 -- # waitforlisten 76399 /var/tmp/bperf.sock 00:21:00.057 20:12:42 -- common/autotest_common.sh@817 -- # '[' -z 76399 ']' 00:21:00.057 20:12:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:00.057 20:12:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:00.057 20:12:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:00.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:00.057 20:12:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:00.057 20:12:42 -- common/autotest_common.sh@10 -- # set +x 00:21:00.057 [2024-04-24 20:12:42.204566] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:00.057 [2024-04-24 20:12:42.204721] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:21:00.057 Zero copy mechanism will not be used. 
00:21:00.057 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76399 ] 00:21:00.320 [2024-04-24 20:12:42.343734] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.320 [2024-04-24 20:12:42.446972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.887 20:12:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:00.887 20:12:43 -- common/autotest_common.sh@850 -- # return 0 00:21:00.887 20:12:43 -- host/digest.sh@86 -- # false 00:21:00.887 20:12:43 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:00.887 20:12:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:01.145 20:12:43 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.145 20:12:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:01.404 nvme0n1 00:21:01.404 20:12:43 -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:01.404 20:12:43 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:01.662 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:01.662 Zero copy mechanism will not be used. 00:21:01.662 Running I/O for 2 seconds... 00:21:03.566 00:21:03.566 Latency(us) 00:21:03.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:03.566 nvme0n1 : 2.00 7817.71 977.21 0.00 0.00 2043.05 1266.36 6868.40 00:21:03.566 =================================================================================================================== 00:21:03.566 Total : 7817.71 977.21 0.00 0.00 2043.05 1266.36 6868.40 00:21:03.566 0 00:21:03.566 20:12:45 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:03.566 20:12:45 -- host/digest.sh@93 -- # get_accel_stats 00:21:03.566 20:12:45 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:03.566 20:12:45 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:03.566 | select(.opcode=="crc32c") 00:21:03.566 | "\(.module_name) \(.executed)"' 00:21:03.566 20:12:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:03.825 20:12:45 -- host/digest.sh@94 -- # false 00:21:03.825 20:12:45 -- host/digest.sh@94 -- # exp_module=software 00:21:03.825 20:12:45 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:03.825 20:12:45 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:03.825 20:12:45 -- host/digest.sh@98 -- # killprocess 76399 00:21:03.825 20:12:45 -- common/autotest_common.sh@936 -- # '[' -z 76399 ']' 00:21:03.825 20:12:45 -- common/autotest_common.sh@940 -- # kill -0 76399 00:21:03.825 20:12:45 -- common/autotest_common.sh@941 -- # uname 00:21:03.825 20:12:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.825 20:12:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76399 00:21:03.825 killing process with pid 76399 00:21:03.825 20:12:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:03.825 20:12:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:03.825 
20:12:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76399' 00:21:03.825 20:12:45 -- common/autotest_common.sh@955 -- # kill 76399 00:21:03.825 Received shutdown signal, test time was about 2.000000 seconds 00:21:03.825 00:21:03.825 Latency(us) 00:21:03.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.825 =================================================================================================================== 00:21:03.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.825 20:12:45 -- common/autotest_common.sh@960 -- # wait 76399 00:21:04.084 20:12:46 -- host/digest.sh@132 -- # killprocess 76191 00:21:04.084 20:12:46 -- common/autotest_common.sh@936 -- # '[' -z 76191 ']' 00:21:04.084 20:12:46 -- common/autotest_common.sh@940 -- # kill -0 76191 00:21:04.084 20:12:46 -- common/autotest_common.sh@941 -- # uname 00:21:04.084 20:12:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.084 20:12:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76191 00:21:04.084 killing process with pid 76191 00:21:04.084 20:12:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.084 20:12:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.084 20:12:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76191' 00:21:04.084 20:12:46 -- common/autotest_common.sh@955 -- # kill 76191 00:21:04.084 [2024-04-24 20:12:46.211517] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:04.084 20:12:46 -- common/autotest_common.sh@960 -- # wait 76191 00:21:04.343 00:21:04.343 real 0m17.644s 00:21:04.343 user 0m33.662s 00:21:04.343 sys 0m4.429s 00:21:04.343 ************************************ 00:21:04.343 END TEST nvmf_digest_clean 00:21:04.343 ************************************ 00:21:04.343 20:12:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:04.343 20:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:04.343 20:12:46 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:04.343 20:12:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:04.343 20:12:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.343 20:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:04.343 ************************************ 00:21:04.343 START TEST nvmf_digest_error 00:21:04.343 ************************************ 00:21:04.343 20:12:46 -- common/autotest_common.sh@1111 -- # run_digest_error 00:21:04.343 20:12:46 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:04.343 20:12:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:04.343 20:12:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:04.343 20:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:04.343 20:12:46 -- nvmf/common.sh@470 -- # nvmfpid=76492 00:21:04.343 20:12:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:04.343 20:12:46 -- nvmf/common.sh@471 -- # waitforlisten 76492 00:21:04.343 20:12:46 -- common/autotest_common.sh@817 -- # '[' -z 76492 ']' 00:21:04.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
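The nvmf_digest_error test that starts here reuses the same target and bperf plumbing but exercises the failure path. As the trace below shows, the target gets its crc32c opcode assigned to the error accel module, bdevperf is launched as before (without the deferred-init flag this time), the bperf side enables per-error NVMe statistics and sets the bdev retry count, and corruption is then injected into 256 crc32c operations so that reads complete with the data digest errors and TRANSIENT TRANSPORT ERROR statuses seen later in this section. A condensed sketch of those RPCs; sending the target-side calls to the default /var/tmp/spdk.sock socket is an assumption of this sketch, since the rpc_cmd wrapper hides the address:

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF=/var/tmp/bperf.sock
  # target: route the crc32c opcode through the error-injection accel module
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  # bperf: record per-error NVMe statistics and set the bdev retry count used by the test
  $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start with injection disabled, attach the controller with data digest on, then corrupt
  # the next 256 crc32c operations on the target and drive the workload
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests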
00:21:04.343 20:12:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.343 20:12:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:04.343 20:12:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.343 20:12:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:04.343 20:12:46 -- common/autotest_common.sh@10 -- # set +x 00:21:04.602 [2024-04-24 20:12:46.628246] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:04.602 [2024-04-24 20:12:46.628316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.602 [2024-04-24 20:12:46.753115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.602 [2024-04-24 20:12:46.853116] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.602 [2024-04-24 20:12:46.853169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.602 [2024-04-24 20:12:46.853176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.602 [2024-04-24 20:12:46.853181] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.602 [2024-04-24 20:12:46.853185] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.602 [2024-04-24 20:12:46.853208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.568 20:12:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:05.568 20:12:47 -- common/autotest_common.sh@850 -- # return 0 00:21:05.568 20:12:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:05.568 20:12:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.568 20:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 20:12:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.568 20:12:47 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:05.568 20:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.568 20:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 [2024-04-24 20:12:47.552336] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:05.568 20:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.568 20:12:47 -- host/digest.sh@105 -- # common_target_config 00:21:05.568 20:12:47 -- host/digest.sh@43 -- # rpc_cmd 00:21:05.568 20:12:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.568 20:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 null0 00:21:05.568 [2024-04-24 20:12:47.649028] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.568 [2024-04-24 20:12:47.672907] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:05.568 [2024-04-24 20:12:47.673123] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.568 20:12:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.568 20:12:47 -- host/digest.sh@108 -- # 
run_bperf_err randread 4096 128 00:21:05.568 20:12:47 -- host/digest.sh@54 -- # local rw bs qd 00:21:05.568 20:12:47 -- host/digest.sh@56 -- # rw=randread 00:21:05.568 20:12:47 -- host/digest.sh@56 -- # bs=4096 00:21:05.568 20:12:47 -- host/digest.sh@56 -- # qd=128 00:21:05.568 20:12:47 -- host/digest.sh@58 -- # bperfpid=76524 00:21:05.568 20:12:47 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:05.568 20:12:47 -- host/digest.sh@60 -- # waitforlisten 76524 /var/tmp/bperf.sock 00:21:05.568 20:12:47 -- common/autotest_common.sh@817 -- # '[' -z 76524 ']' 00:21:05.568 20:12:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:05.568 20:12:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:05.568 20:12:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:05.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:05.568 20:12:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:05.568 20:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.568 [2024-04-24 20:12:47.728897] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:05.568 [2024-04-24 20:12:47.729046] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76524 ] 00:21:05.828 [2024-04-24 20:12:47.868105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.828 [2024-04-24 20:12:47.957291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.396 20:12:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.396 20:12:48 -- common/autotest_common.sh@850 -- # return 0 00:21:06.396 20:12:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:06.396 20:12:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:06.654 20:12:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:06.654 20:12:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.654 20:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:06.654 20:12:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.654 20:12:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:06.654 20:12:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:06.912 nvme0n1 00:21:06.912 20:12:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:06.912 20:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.912 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:21:06.912 20:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.912 20:12:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:06.912 20:12:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:21:07.171 Running I/O for 2 seconds... 00:21:07.171 [2024-04-24 20:12:49.198190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.171 [2024-04-24 20:12:49.198241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.171 [2024-04-24 20:12:49.198252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.171 [2024-04-24 20:12:49.212556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.171 [2024-04-24 20:12:49.212588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.171 [2024-04-24 20:12:49.212596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.171 [2024-04-24 20:12:49.226673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.226704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.226713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.240736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.240766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.240773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.254611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.254638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.254646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.268672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.268703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.268712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.282128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.282175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.282183] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.295944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.295974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.295982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.310104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.310143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.325487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.325522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.325532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.340012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.340044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.340053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.354467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.354544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.354553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.369100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.369142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.369150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.383730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.383771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.383780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.398135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.398175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.398183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.172 [2024-04-24 20:12:49.412756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.172 [2024-04-24 20:12:49.412794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.172 [2024-04-24 20:12:49.412802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.428507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.428558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.428568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.445204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.445268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.445279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.461272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.461324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.461334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.477510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.477557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.477567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.493454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.493499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:07.432 [2024-04-24 20:12:49.493509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.508560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.508598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.508607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.523634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.523693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.523703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.539881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.539938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.539948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.557107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.557158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.557169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.573795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.573841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.573852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.589927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.589975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.589986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.605457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.605503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:4031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.605513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.620973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.621016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.621026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.635646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.635694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.635701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.650147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.650182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.650190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.664594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.664624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.664631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.432 [2024-04-24 20:12:49.678867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.432 [2024-04-24 20:12:49.678899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.432 [2024-04-24 20:12:49.678908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.694586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.694620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.694628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.709071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.709099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.709107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.723125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.723156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.723165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.738755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.738786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.738795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.753385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.753415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.753423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.768395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.768429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.768436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.784312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.784350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.784359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.800428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.800466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.800475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.816657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 
00:21:07.691 [2024-04-24 20:12:49.816701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.816711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.832163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.832203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.832211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.847266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.847307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.847316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.863170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.863212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.863222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.879297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.879338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.879347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.691 [2024-04-24 20:12:49.895578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.691 [2024-04-24 20:12:49.895616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.691 [2024-04-24 20:12:49.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.692 [2024-04-24 20:12:49.911891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.692 [2024-04-24 20:12:49.911927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.692 [2024-04-24 20:12:49.911935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.692 [2024-04-24 20:12:49.928017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.692 [2024-04-24 20:12:49.928050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.692 [2024-04-24 20:12:49.928059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.692 [2024-04-24 20:12:49.944223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.692 [2024-04-24 20:12:49.944258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.692 [2024-04-24 20:12:49.944266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.950 [2024-04-24 20:12:49.960549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:49.960586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:49.960596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:49.976594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:49.976639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:49.976649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:49.992926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:49.992972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:49.992982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.009258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.009303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.009312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.025591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.025636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.025645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.040982] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.041028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.041038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.055461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.055505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:24269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.055514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.070095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.070137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.070146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.085319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.085368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.085389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.100422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.100460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.100469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.114742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.114776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.114785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.129300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.129332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.129340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.143556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.143588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.143596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.165055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.165087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.165096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.180791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.180867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.180880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:07.951 [2024-04-24 20:12:50.196897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:07.951 [2024-04-24 20:12:50.196938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.951 [2024-04-24 20:12:50.196947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.209 [2024-04-24 20:12:50.212895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.209 [2024-04-24 20:12:50.212938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.209 [2024-04-24 20:12:50.212948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.209 [2024-04-24 20:12:50.228983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.209 [2024-04-24 20:12:50.229032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.209 [2024-04-24 20:12:50.229042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.209 [2024-04-24 20:12:50.244936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.209 [2024-04-24 20:12:50.244982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:25493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.209 [2024-04-24 20:12:50.244991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.209 [2024-04-24 20:12:50.259954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.209 [2024-04-24 20:12:50.260005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.209 [2024-04-24 20:12:50.260015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.275046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.275103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.275114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.290877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.290931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.290942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.306384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.306444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.306455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.321143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.321185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.321193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.335579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.335621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.335630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.350332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.350371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 
20:12:50.350391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.364982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.365015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.365023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.379105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.379139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.379148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.394135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.394175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.394183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.409623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.409669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.409679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.425572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.425617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.425627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.441474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.441516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.441527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.210 [2024-04-24 20:12:50.457778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.210 [2024-04-24 20:12:50.457823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5080 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:08.210 [2024-04-24 20:12:50.457833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.473588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.473634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.473644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.489516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.489560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.489570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.505171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.505215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.505225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.520551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.520592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.520601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.535857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.535902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.535912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.552173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.552222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.552234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.568810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.568869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.568880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.585310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.469 [2024-04-24 20:12:50.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.469 [2024-04-24 20:12:50.585364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.469 [2024-04-24 20:12:50.601988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.602042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.602053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.618261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.618312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.618323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.634231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.634280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.634291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.650285] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.650336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.650347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.666584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.666636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.666645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.682346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.682407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.682416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.697531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.697577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.697586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.470 [2024-04-24 20:12:50.712638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.470 [2024-04-24 20:12:50.712688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.470 [2024-04-24 20:12:50.712698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.728657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.728711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.728721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.744847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.744896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.744905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.760840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.760887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.760897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.776964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.777010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.777019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.792775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 
00:21:08.729 [2024-04-24 20:12:50.792812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.792821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.807021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.807058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.807066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.822182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.822235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.822245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.838435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.838488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.838499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.854411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.729 [2024-04-24 20:12:50.854469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.729 [2024-04-24 20:12:50.854488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.729 [2024-04-24 20:12:50.870330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.870404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.870415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.886766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.886821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.886830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.902908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.902961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.902971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.918269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.918314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.918322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.933090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.933129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.933138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.947220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.947255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.947263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.961445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.961479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.961488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.730 [2024-04-24 20:12:50.976348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.730 [2024-04-24 20:12:50.976393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.730 [2024-04-24 20:12:50.976403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:50.991736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:50.991776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:50.991786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.006814] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.006847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.006855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.022647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.022689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.022698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.038877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.038920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.038929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.055119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.055159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.055168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.071018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.071059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.071068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.086924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.086968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.086977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.102681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.102723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:08.989 [2024-04-24 20:12:51.118064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.118101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.118109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.133154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.133189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.133198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.148237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.148268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.148276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:08.989 [2024-04-24 20:12:51.169955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1824460) 00:21:08.989 [2024-04-24 20:12:51.169988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.989 [2024-04-24 20:12:51.169997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:08.989
00:21:08.989 Latency(us)
00:21:08.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:08.989 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:08.989 nvme0n1 : 2.00 16347.66 63.86 0.00 0.00 7824.32 6610.84 29534.13
00:21:08.989 ===================================================================================================================
00:21:08.989 Total : 16347.66 63.86 0.00 0.00 7824.32 6610.84 29534.13
00:21:08.989 0
00:21:08.989 20:12:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:08.989 20:12:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:08.989 20:12:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:08.989 | .driver_specific
00:21:08.989 | .nvme_error
00:21:08.989 | .status_code
00:21:08.989 | .command_transient_transport_error'
00:21:08.989 20:12:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:09.249 20:12:51 -- host/digest.sh@71 -- # (( 128 > 0 ))
00:21:09.249 20:12:51 -- host/digest.sh@73 -- # killprocess 76524
00:21:09.249 20:12:51 -- common/autotest_common.sh@936 -- # '[' -z 76524 ']'
00:21:09.249 20:12:51 -- common/autotest_common.sh@940 -- # kill -0 76524
00:21:09.249 20:12:51 -- common/autotest_common.sh@941 -- # uname
00:21:09.249 20:12:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:09.249 20:12:51 -- common/autotest_common.sh@942 -- # ps
--no-headers -o comm= 76524 00:21:09.249 killing process with pid 76524 00:21:09.249 Received shutdown signal, test time was about 2.000000 seconds 00:21:09.249 00:21:09.249 Latency(us) 00:21:09.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.249 =================================================================================================================== 00:21:09.249 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.249 20:12:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:09.249 20:12:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:09.249 20:12:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76524' 00:21:09.249 20:12:51 -- common/autotest_common.sh@955 -- # kill 76524 00:21:09.249 20:12:51 -- common/autotest_common.sh@960 -- # wait 76524 00:21:09.507 20:12:51 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:09.507 20:12:51 -- host/digest.sh@54 -- # local rw bs qd 00:21:09.507 20:12:51 -- host/digest.sh@56 -- # rw=randread 00:21:09.507 20:12:51 -- host/digest.sh@56 -- # bs=131072 00:21:09.507 20:12:51 -- host/digest.sh@56 -- # qd=16 00:21:09.507 20:12:51 -- host/digest.sh@58 -- # bperfpid=76583 00:21:09.507 20:12:51 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:09.507 20:12:51 -- host/digest.sh@60 -- # waitforlisten 76583 /var/tmp/bperf.sock 00:21:09.507 20:12:51 -- common/autotest_common.sh@817 -- # '[' -z 76583 ']' 00:21:09.507 20:12:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:09.507 20:12:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:09.507 20:12:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:09.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:09.507 20:12:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:09.507 20:12:51 -- common/autotest_common.sh@10 -- # set +x 00:21:09.507 [2024-04-24 20:12:51.724334] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:09.507 [2024-04-24 20:12:51.724530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:21:09.507 Zero copy mechanism will not be used. 
00:21:09.507 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76583 ] 00:21:09.766 [2024-04-24 20:12:51.847703] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.766 [2024-04-24 20:12:51.968530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.749 20:12:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:10.749 20:12:52 -- common/autotest_common.sh@850 -- # return 0 00:21:10.749 20:12:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:10.749 20:12:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:10.749 20:12:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:10.749 20:12:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.749 20:12:52 -- common/autotest_common.sh@10 -- # set +x 00:21:10.749 20:12:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.749 20:12:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:10.749 20:12:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:11.008 nvme0n1 00:21:11.008 20:12:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:11.008 20:12:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.008 20:12:53 -- common/autotest_common.sh@10 -- # set +x 00:21:11.008 20:12:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.008 20:12:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:11.008 20:12:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:11.008 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:11.008 Zero copy mechanism will not be used. 00:21:11.008 Running I/O for 2 seconds... 
00:21:11.008 [2024-04-24 20:12:53.237791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.008 [2024-04-24 20:12:53.237848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.008 [2024-04-24 20:12:53.237861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.008 [2024-04-24 20:12:53.241991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.008 [2024-04-24 20:12:53.242030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.008 [2024-04-24 20:12:53.242041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.008 [2024-04-24 20:12:53.246180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.008 [2024-04-24 20:12:53.246214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.008 [2024-04-24 20:12:53.246223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.008 [2024-04-24 20:12:53.250262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.008 [2024-04-24 20:12:53.250296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.008 [2024-04-24 20:12:53.250304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.008 [2024-04-24 20:12:53.254286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.008 [2024-04-24 20:12:53.254317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.008 [2024-04-24 20:12:53.254325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.008 [2024-04-24 20:12:53.258409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.008 [2024-04-24 20:12:53.258442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.008 [2024-04-24 20:12:53.258451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.262709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.262745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.262755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.266916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.266950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.266959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.271129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.271166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.271176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.275202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.275236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.275245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.279267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.279300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.279309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.283259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.283293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.283302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.287256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.287289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.287298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.291199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.291234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.291243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.295371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.295425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.295435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.299628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.299674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.299684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.303971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.304005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.304013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.308269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.308306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.308314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.312469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.312503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.312511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.316677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.316710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.316719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.320685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.320717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:11.268 [2024-04-24 20:12:53.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.324678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.324707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.324716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.328719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.328749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.328757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.332734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.332766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.332773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.336826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.336856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.336864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.340785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.340815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.340823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.344751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.344780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.344788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.348799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.348832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.348840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.352807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.352836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.352843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.356724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.356754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.356762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.268 [2024-04-24 20:12:53.360633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.268 [2024-04-24 20:12:53.360662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.268 [2024-04-24 20:12:53.360670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.364512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.364540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.364548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.368440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.368466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.368473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.372352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.372395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.372405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.376565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.376593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.376601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.380363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.380400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.380408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.384201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.384231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.384239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.387936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.387967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.387974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.391914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.391945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.391952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.395752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.395782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.395789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.399646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.399677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.399686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.403431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:11.269 [2024-04-24 20:12:53.403458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.403466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.407273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.407304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.411288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.411320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.411329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.415306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.415337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.415345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.419196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.419226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.419234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.423223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.423255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.423264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.427173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.427204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.427212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.431148] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.431180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.431189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.435058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.435087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.435095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.439291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.439327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.439336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.443479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.443509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.443518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.447610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.447658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.447668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.451730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.451761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.451770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.455809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.455838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.455846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.459936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.459981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.459989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.463979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.464008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.464016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.468065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.468095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.468103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.472056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.472086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.472093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.475937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.475967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.475974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.479670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.479699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.479706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.483442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.483469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.483476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.487162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.487192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.487200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.491124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.491154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.491162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.494878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.494909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.494917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.498644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.498672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.498680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.502408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.502435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.502442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.506095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.506123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.506131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.509978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.510007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.510015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.513828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.513858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.513865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.269 [2024-04-24 20:12:53.517804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.269 [2024-04-24 20:12:53.517832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.269 [2024-04-24 20:12:53.517839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.530 [2024-04-24 20:12:53.521673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.521702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.521710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.525644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.525673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.525680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.529497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.529524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.529531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.533365] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.533402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.533410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.537170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.537199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:11.531 [2024-04-24 20:12:53.537206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.541112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.541142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.541150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.545019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.545048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.545055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.548994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.549023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.549031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.552973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.553003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.553011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.556940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.556970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.556977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.560875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.560903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.560910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.564799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.564830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.564837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.568708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.568738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.568746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.572725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.572770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.572778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.576577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.576605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.576613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.580488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.580519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.580527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.584378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.584416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.584424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.588435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.588463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.588471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.592420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.592448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.592456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.596548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.596578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.596586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.600553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.600580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.600587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.604405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.604433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.604440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.608271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.608301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.608309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.612271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.612301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.612309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.616260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.616289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.616296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.620326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:11.531 [2024-04-24 20:12:53.620359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.620368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.531 [2024-04-24 20:12:53.624433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.531 [2024-04-24 20:12:53.624467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.531 [2024-04-24 20:12:53.624476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.628530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.628560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.628569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.632615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.632647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.632656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.636954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.636990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.636999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.641286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.641322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.641332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.645594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.645625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.645635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.649858] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.649892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.649900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.654042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.654077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.654085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.658345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.658373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.658397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.662624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.662657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.662665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.666766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.666800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.666809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.670739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.670771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.670778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.674671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.674703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.678741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.678771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.678780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.682966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.683001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.683010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.687028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.687060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.687068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.691043] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.691075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.691083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.695192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.695224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.695232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.699213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.699246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.699255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.703408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.703437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.703446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.707510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.707541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.707549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.711605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.711636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.711645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.715658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.715691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.715699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.719890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.719922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.719930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.723891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.723922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.723929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.728003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.728040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.732016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.732045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.732052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.532 [2024-04-24 20:12:53.736047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.532 [2024-04-24 20:12:53.736076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.532 [2024-04-24 20:12:53.736083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.740069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.740097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.740104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.744153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.744183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.744190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.748115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.748144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.748152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.752147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.752177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.752184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.756108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.756137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.756144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.760072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.760102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.760110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.764036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.764064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.764072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.768029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.768059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.768067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.771900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.771929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.771936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.775894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.775924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.775932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.533 [2024-04-24 20:12:53.779994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.533 [2024-04-24 20:12:53.780024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.533 [2024-04-24 20:12:53.780031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.784025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.784055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.784062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.788073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.788101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.788108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.792162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.792193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.792200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.796130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.796161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.796169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.800338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.800372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.800395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.804422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.804450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.804457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.808351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.808392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.808401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.812391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.812432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.812440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.816189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.816219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.816226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.820097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.820127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.820134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.824011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.824043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.824051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.828146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.828178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.828185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.832114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.832145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.832153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.836124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.836158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.836166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.840190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.840221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.840230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.844318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:11.794 [2024-04-24 20:12:53.844355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.844364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.848604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.848639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.848648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.852666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.852699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.852708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.856725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.856755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.856762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.860696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.860726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.860734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.864579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.794 [2024-04-24 20:12:53.864607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.794 [2024-04-24 20:12:53.864614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.794 [2024-04-24 20:12:53.868526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.868555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.868563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.872371] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.872408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.872415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.876119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.876147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.876155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.880257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.880286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.880294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.884331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.884360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.884367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.888288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.888318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.888326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.892280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.892310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.892317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.896292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.896320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.896328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.900211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.900240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.900247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.904264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.904294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.904301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.908226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.908256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.908264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.912111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.912142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.912150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.915893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.915923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.915930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.919850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.919881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.919890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.923984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.924014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.924022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.928121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.928153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.928162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.932004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.932033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.932041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.936101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.936135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.936144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.940310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.940343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.940351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.944505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.944537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.944545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.948610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.948642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.948651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.952836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.952868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.952876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.956962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.956991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.956999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.961096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.961125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.961132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.965180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.965210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.965217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.969312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.969341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.969347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.973242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.795 [2024-04-24 20:12:53.973270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.795 [2024-04-24 20:12:53.973277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.795 [2024-04-24 20:12:53.977215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:53.977243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:53.977250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:53.981118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:53.981150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:11.796 [2024-04-24 20:12:53.981157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:53.985087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:53.985116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:53.985123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:53.989022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:53.989051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:53.989059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:53.993077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:53.993106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:53.993114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:53.997061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:53.997090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:53.997097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.001233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.001265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.001274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.005246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.005275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.005282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.009349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.009387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.009396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.013359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.013398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.013406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.017327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.017358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.017365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.021351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.021391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.021399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.025412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.025438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.025445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.029364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.029411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.029421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.033360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.033402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.033411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.037556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.037584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.037591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:11.796 [2024-04-24 20:12:54.041503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:11.796 [2024-04-24 20:12:54.041531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:11.796 [2024-04-24 20:12:54.041539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.045785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.045819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.045828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.049977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.050020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.050032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.054312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.054346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.054355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.058344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.058382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.058390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.062417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.062443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.062451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.066489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:12.060 [2024-04-24 20:12:54.066517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.066525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.070708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.070740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.070749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.075008] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.075045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.075055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.079278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.079314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.079324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.083444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.083475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.083484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.087543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.087573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.087581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.091716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.091750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.091758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.095841] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.095874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.095882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.099816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.099847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.099855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.103799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.103829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.103836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.107805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.107836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.107844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.111792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.111822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.111830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.115681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.115711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.115719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.119832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.119863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.119872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.123748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.123778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.123785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.127798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.127829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.127837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.131766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.131795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.131803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.135711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.135740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.135747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.139578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.139609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.139617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.143421] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.143448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.143455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.147237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.060 [2024-04-24 20:12:54.147268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.060 [2024-04-24 20:12:54.147277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.060 [2024-04-24 20:12:54.151309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.151341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.151349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.155336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.155368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.155387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.159531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.159560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.159569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.163506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.163535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.163543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.167725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.167757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.167766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.171758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.171790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.171798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.175802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.175833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.175841] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.179971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.180004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.180013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.184210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.184243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.184251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.188576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.188607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.188615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.192763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.192795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.192804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.197194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.197230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.197239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.201539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.201573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.201582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.205769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.205814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:12.061 [2024-04-24 20:12:54.205823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.210033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.210065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.210074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.214302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.214334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.214342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.218543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.218575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.218584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.222784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.222817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.222826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.227037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.227071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.227080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.231240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.231274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.231282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.235417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.235446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.235454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.239687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.239719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.239727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.243848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.243882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.243890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.247952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.247985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.247994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.252111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.252144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.252153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.256239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.256272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.256280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.061 [2024-04-24 20:12:54.260398] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.061 [2024-04-24 20:12:54.260428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.061 [2024-04-24 20:12:54.260436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.264522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.264551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.264559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.268686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.268717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.268725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.272973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.273004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.273011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.277078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.277108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.277115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.281078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.281106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.281113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.285108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.285138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.285145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.289135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.289165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.289173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.293186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:12.062 [2024-04-24 20:12:54.293216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.293224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.297297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.297326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.297333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.301426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.301452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.301460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.305461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.305489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.305496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.062 [2024-04-24 20:12:54.309550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.062 [2024-04-24 20:12:54.309580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.062 [2024-04-24 20:12:54.309588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.326 [2024-04-24 20:12:54.313865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.326 [2024-04-24 20:12:54.313900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.326 [2024-04-24 20:12:54.313908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.326 [2024-04-24 20:12:54.318044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.326 [2024-04-24 20:12:54.318078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.326 [2024-04-24 20:12:54.318086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.326 [2024-04-24 20:12:54.322206] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.326 [2024-04-24 20:12:54.322238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.326 [2024-04-24 20:12:54.322247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.326 [2024-04-24 20:12:54.326411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.326 [2024-04-24 20:12:54.326440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.326 [2024-04-24 20:12:54.326448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.326 [2024-04-24 20:12:54.330621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.326 [2024-04-24 20:12:54.330653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.326 [2024-04-24 20:12:54.330662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.326 [2024-04-24 20:12:54.334765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.326 [2024-04-24 20:12:54.334796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.334805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.338919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.338951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.338959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.343203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.343236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.343244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.347354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.347400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.347410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.351458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.351489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.351497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.355549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.355581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.355590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.359655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.359688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.359697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.363730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.363764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.363773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.367760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.367791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.367799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.371837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.371870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.371879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.375882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.375911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.375919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.379777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.379808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.379815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.383814] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.383847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.383855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.387813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.387848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.387856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.392009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.392045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.392054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.396088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.396122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.396131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.400088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.400123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.400133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.404244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.404279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.404288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.408396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.408427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.408436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.412582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.412612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.412620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.416669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.416701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.416709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.420860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.420894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.420902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.424989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.425021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.425029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.429116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.429149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.429158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.433280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.433312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:12.327 [2024-04-24 20:12:54.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.437364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.437405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.437429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.441480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.441508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.327 [2024-04-24 20:12:54.441515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.327 [2024-04-24 20:12:54.445470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.327 [2024-04-24 20:12:54.445501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.445508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.449470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.449496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.449504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.453517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.453546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.453554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.457736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.457767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.457775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.461902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.461933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.461941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.466076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.466107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.466115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.470279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.470311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.470320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.474436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.474464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.474472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.478509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.478556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.478565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.482498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.482544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.482554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.486587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.486616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.486625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.490665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.490695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.490703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.494684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.494715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.494723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.498739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.498769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.498778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.502712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.502743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.502751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.506851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.506883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.506891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.510892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.510925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.510933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.514821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.514858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.518741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:12.328 [2024-04-24 20:12:54.518770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.518778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.522733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.522764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.522773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.526804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.526836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.526845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.530792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.530823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.534900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.534933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.534942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.538905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.538945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.538954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.542973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.543005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.543014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.547012] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.547044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.547052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.551242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.551274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.328 [2024-04-24 20:12:54.551282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.328 [2024-04-24 20:12:54.555407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.328 [2024-04-24 20:12:54.555436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.329 [2024-04-24 20:12:54.555444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.329 [2024-04-24 20:12:54.559595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.329 [2024-04-24 20:12:54.559629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.329 [2024-04-24 20:12:54.559638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.329 [2024-04-24 20:12:54.563840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.329 [2024-04-24 20:12:54.563872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.329 [2024-04-24 20:12:54.563880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.329 [2024-04-24 20:12:54.567949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.329 [2024-04-24 20:12:54.567981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.329 [2024-04-24 20:12:54.567989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.329 [2024-04-24 20:12:54.572275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.329 [2024-04-24 20:12:54.572308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.329 [2024-04-24 20:12:54.572316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:12.329 [2024-04-24 20:12:54.576662] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.329 [2024-04-24 20:12:54.576697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.329 [2024-04-24 20:12:54.576706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.590 [2024-04-24 20:12:54.580982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.590 [2024-04-24 20:12:54.581015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.590 [2024-04-24 20:12:54.581024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.590 [2024-04-24 20:12:54.585192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.590 [2024-04-24 20:12:54.585227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.590 [2024-04-24 20:12:54.585236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.590 [2024-04-24 20:12:54.589474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.590 [2024-04-24 20:12:54.589504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.590 [2024-04-24 20:12:54.589513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.590 [2024-04-24 20:12:54.593672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.593703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.593711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.597876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.597917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.602021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.602053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.602061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.606222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.606253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.606261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.610561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.610591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.610600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.614685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.614716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.614725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.618838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.618872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.618881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.623063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.623097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.623106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.627292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.627326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.627335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.631508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.631539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.631548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.635768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.635798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.635807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.640091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.640124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.640132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.644347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.644390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.644415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.648660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.648694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.648702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.653006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.653038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.653046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.657350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.657395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.657405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.661627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.661659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.661668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.665902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.665935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.665944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.670184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.670217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.670227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.674509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.674557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.674566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.678916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.678948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.678957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.683437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.683469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.683479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.687925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.687960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.687969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.692343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.692391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.692400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.696543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.696574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.696582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.700742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.700771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.591 [2024-04-24 20:12:54.700779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.591 [2024-04-24 20:12:54.704610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.591 [2024-04-24 20:12:54.704638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.704645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.708635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.708664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.708671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.712870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.712902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.712911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.717184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.717214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.717221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.721235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.721264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.721272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.725201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.725231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.725238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.729181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.729210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.729218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.733332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.733364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.733373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.737664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.737693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.737701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.741948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.741978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.741987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.746097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.746133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.750046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:12.592 [2024-04-24 20:12:54.750074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.750082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.754005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.754034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.754041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.758105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.758134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.758141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.762005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.762033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.762039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.765902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.765930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.765937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.769756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.769784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.769791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.773576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.773603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.773610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.777448] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.777474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.777481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.781341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.781370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.781388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.785329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.785357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.785365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.789287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.789316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.789324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.793193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.793222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.793230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.797108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.797137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.797144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.801000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.801030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.801037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.805038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.805067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.805074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.808987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.592 [2024-04-24 20:12:54.809016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.592 [2024-04-24 20:12:54.809023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.592 [2024-04-24 20:12:54.812843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.812879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.816762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.816790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.816798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.820675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.820705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.820712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.824631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.824659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.824667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.828595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.828624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.828631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.832514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.832541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.832548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.836400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.836428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.836436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.593 [2024-04-24 20:12:54.840463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.593 [2024-04-24 20:12:54.840493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.593 [2024-04-24 20:12:54.840501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.853 [2024-04-24 20:12:54.844556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.853 [2024-04-24 20:12:54.844585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.853 [2024-04-24 20:12:54.844594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.848607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.848637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.848645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.852637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.852668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.852675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.856531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.856558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.856565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.860502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.860531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.860539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.864619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.864651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.864659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.868749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.868781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.868789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.872903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.872938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.872946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.877070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.877104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.877112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.881255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.881290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.881298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.885367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.885411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.885419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.889589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.889623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.889632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.893795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.893829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.893838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.897976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.898011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.898019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.902146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.902180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.902189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.906188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.906219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.906227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.910774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.910818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.910826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.914811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.914841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.914849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.918936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.918968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.918977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.923212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.923246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.923255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.927427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.927458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.927467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.931715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.931750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.931759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.936005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.936039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.936048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.940286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.940318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.940327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.944551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.944581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.944589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.948862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.948895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.948904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.953176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.953209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.953218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.854 [2024-04-24 20:12:54.957527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.854 [2024-04-24 20:12:54.957561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.854 [2024-04-24 20:12:54.957569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.961820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.961855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.961865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.966217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.966251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.966260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.970736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.970773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.970783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.975111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:12.855 [2024-04-24 20:12:54.975148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.975158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.979466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.979498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.979507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.983756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.983790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.983799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.987989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.988023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.988032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.992270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.992304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.992313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:54.996608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:54.996642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:54.996651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.001044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.001079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.001088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.005457] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.005488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.005497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.009786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.009818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.009827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.014127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.014162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.014171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.018416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.018447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.018456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.022722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.022757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.022767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.026941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.026975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.026984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.031334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.031369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.031396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.035607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.035639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.035648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.040065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.040101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.040110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.044499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.044530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.044538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.048830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.048863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.048872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.053102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.053135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.053144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.057451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.057482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.057491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.061722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.061755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.061764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.066039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.066074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.066084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.070392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.070423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.070433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.855 [2024-04-24 20:12:55.074745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.855 [2024-04-24 20:12:55.074778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.855 [2024-04-24 20:12:55.074787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.079168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.079202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.079212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.083739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.083775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.083784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.088011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.088046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.088055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.092319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.092354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.092363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.096583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.096616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.096625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.100765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.100798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.100807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:12.856 [2024-04-24 20:12:55.105174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:12.856 [2024-04-24 20:12:55.105206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:12.856 [2024-04-24 20:12:55.105214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.115 [2024-04-24 20:12:55.109600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.109630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.109638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.113901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.113932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.113941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.117863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.117892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.117899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.122005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.122034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:13.116 [2024-04-24 20:12:55.122041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.126193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.126224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.126231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.130179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.130210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.130218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.134061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.134090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.134097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.137927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.137956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.137963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.141797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.141826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.141833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.145672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.145700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.145706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.149549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.149575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.149583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.153458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.153485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.153492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.157404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.157430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.157438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.161255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.161286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.161293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.165139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.165169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.165175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.169050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.169079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.169086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.172997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.173027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.173034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.176890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.176919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.176926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.180718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.180747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.180754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.184539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.184565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.184572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.188294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.188323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.188330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.192089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.192117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.192124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.195846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.195874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.195881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.199653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.199682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.199689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.203383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 
00:21:13.116 [2024-04-24 20:12:55.203421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.203429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.207131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.207160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.207167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.116 [2024-04-24 20:12:55.211095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.116 [2024-04-24 20:12:55.211126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.116 [2024-04-24 20:12:55.211135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.117 [2024-04-24 20:12:55.215057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.117 [2024-04-24 20:12:55.215088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.117 [2024-04-24 20:12:55.215096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.117 [2024-04-24 20:12:55.219152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.117 [2024-04-24 20:12:55.219184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.117 [2024-04-24 20:12:55.219192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:13.117 [2024-04-24 20:12:55.223071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.117 [2024-04-24 20:12:55.223102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.117 [2024-04-24 20:12:55.223111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:13.117 [2024-04-24 20:12:55.227095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2040530) 00:21:13.117 [2024-04-24 20:12:55.227127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:13.117 [2024-04-24 20:12:55.227135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:13.117 00:21:13.117 Latency(us) 00:21:13.117 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:21:13.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:13.117 nvme0n1 : 2.00 7562.92 945.36 0.00 0.00 2112.87 1774.34 5122.68 00:21:13.117 =================================================================================================================== 00:21:13.117 Total : 7562.92 945.36 0.00 0.00 2112.87 1774.34 5122.68 00:21:13.117 0 00:21:13.117 20:12:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:13.117 20:12:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:13.117 20:12:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:13.117 20:12:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:13.117 | .driver_specific 00:21:13.117 | .nvme_error 00:21:13.117 | .status_code 00:21:13.117 | .command_transient_transport_error' 00:21:13.375 20:12:55 -- host/digest.sh@71 -- # (( 488 > 0 )) 00:21:13.375 20:12:55 -- host/digest.sh@73 -- # killprocess 76583 00:21:13.375 20:12:55 -- common/autotest_common.sh@936 -- # '[' -z 76583 ']' 00:21:13.375 20:12:55 -- common/autotest_common.sh@940 -- # kill -0 76583 00:21:13.375 20:12:55 -- common/autotest_common.sh@941 -- # uname 00:21:13.375 20:12:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.375 20:12:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76583 00:21:13.375 killing process with pid 76583 00:21:13.375 Received shutdown signal, test time was about 2.000000 seconds 00:21:13.375 00:21:13.375 Latency(us) 00:21:13.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.375 =================================================================================================================== 00:21:13.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.375 20:12:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:13.376 20:12:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:13.376 20:12:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76583' 00:21:13.376 20:12:55 -- common/autotest_common.sh@955 -- # kill 76583 00:21:13.376 20:12:55 -- common/autotest_common.sh@960 -- # wait 76583 00:21:13.633 20:12:55 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:13.633 20:12:55 -- host/digest.sh@54 -- # local rw bs qd 00:21:13.633 20:12:55 -- host/digest.sh@56 -- # rw=randwrite 00:21:13.633 20:12:55 -- host/digest.sh@56 -- # bs=4096 00:21:13.633 20:12:55 -- host/digest.sh@56 -- # qd=128 00:21:13.633 20:12:55 -- host/digest.sh@58 -- # bperfpid=76639 00:21:13.633 20:12:55 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:13.633 20:12:55 -- host/digest.sh@60 -- # waitforlisten 76639 /var/tmp/bperf.sock 00:21:13.633 20:12:55 -- common/autotest_common.sh@817 -- # '[' -z 76639 ']' 00:21:13.633 20:12:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:13.633 20:12:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:13.633 20:12:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:13.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
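The get_transient_errcount step traced above (host/digest.sh@71) is what turns the injected digest failures into a pass/fail signal: it reads the bdev's per-status-code NVMe error counters over the bperf RPC socket and extracts the transient-transport-error count with jq. A minimal sketch of that step, assuming bdevperf is still listening on /var/tmp/bperf.sock and the attached bdev is named nvme0n1 as in this run:

# Read per-status-code NVMe error counters from the bdev layer; this only
# works because bdev_nvme_set_options was given --nvme-error-stat earlier.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
# Any value > 0 means the corrupted digests were surfaced as transient errors.
(( errcount > 0 )) && echo "digest errors detected: $errcount"

In the randread run above this evaluates to 488, which is why the (( 488 > 0 )) check passes and killprocess then shuts down bperf pid 76583.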
00:21:13.633 20:12:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:13.633 20:12:55 -- common/autotest_common.sh@10 -- # set +x 00:21:13.633 [2024-04-24 20:12:55.770673] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:13.633 [2024-04-24 20:12:55.770859] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76639 ] 00:21:13.891 [2024-04-24 20:12:55.894426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.891 [2024-04-24 20:12:55.998978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.458 20:12:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:14.458 20:12:56 -- common/autotest_common.sh@850 -- # return 0 00:21:14.458 20:12:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:14.458 20:12:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:14.716 20:12:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:14.716 20:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.716 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:21:14.716 20:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.716 20:12:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.716 20:12:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.975 nvme0n1 00:21:14.975 20:12:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:14.975 20:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.975 20:12:57 -- common/autotest_common.sh@10 -- # set +x 00:21:14.975 20:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.975 20:12:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:14.975 20:12:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:15.234 Running I/O for 2 seconds... 
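Everything the trace above sets up before "Running I/O for 2 seconds..." can be read as a five-step recipe: count NVMe errors per status code, clear any stale crc32c injection, attach the TCP controller with data digest enabled, arm the crc32c corruption, and start the bdevperf workload. A condensed sketch of those steps using the socket, address and NQN visible in this run; the accel injection goes through rpc_cmd, whose socket is not shown in the trace, so the default application socket is assumed:

# RPCs against the bdevperf instance listening on /var/tmp/bperf.sock
# (started as "bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z").
BPERF_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
# rpc_cmd in the trace targets the SPDK application's RPC socket; its path is
# not visible in this log, so the default socket is assumed here.
APP_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'

# Count NVMe errors per status code and retry failed I/O indefinitely, so
# digest failures are counted as transient errors instead of failing the run.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no stale crc32c error injection is active.
$APP_RPC accel_error_inject_error -o crc32c -t disable

# Attach the target over TCP with data digest checking enabled (--ddgst).
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 256 crc32c operations so every data digest check fails.
$APP_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the 2-second randwrite workload through the bdevperf RPC helper.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

The intent of --bdev-retry-count -1 together with --nvme-error-stat appears to be that the corrupted completions never reach the test as hard I/O failures; they only increment the counters that get_transient_errcount reads back once the 2-second run finishes.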
00:21:15.234 [2024-04-24 20:12:57.295872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fef90 00:21:15.234 [2024-04-24 20:12:57.298235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.298325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.309660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190feb58 00:21:15.234 [2024-04-24 20:12:57.311987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.312066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.323369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fe2e8 00:21:15.234 [2024-04-24 20:12:57.325615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.325688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.337514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fda78 00:21:15.234 [2024-04-24 20:12:57.339817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.339911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.351858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fd208 00:21:15.234 [2024-04-24 20:12:57.354027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.354100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.365895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fc998 00:21:15.234 [2024-04-24 20:12:57.368130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.368204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.379291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fc128 00:21:15.234 [2024-04-24 20:12:57.381458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.381545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:21:15.234 [2024-04-24 20:12:57.393309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fb8b8 00:21:15.234 [2024-04-24 20:12:57.395425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.395501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.406798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fb048 00:21:15.234 [2024-04-24 20:12:57.408951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.409022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.420246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fa7d8 00:21:15.234 [2024-04-24 20:12:57.422195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.422252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.434269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f9f68 00:21:15.234 [2024-04-24 20:12:57.436537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.436566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.448291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f96f8 00:21:15.234 [2024-04-24 20:12:57.450248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.450274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.462016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f8e88 00:21:15.234 [2024-04-24 20:12:57.464070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.464110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:15.234 [2024-04-24 20:12:57.476207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f8618 00:21:15.234 [2024-04-24 20:12:57.478285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.234 [2024-04-24 20:12:57.478310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.490010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f7da8 00:21:15.493 [2024-04-24 20:12:57.492208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.492237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.504829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f7538 00:21:15.493 [2024-04-24 20:12:57.506918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.506950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.519755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f6cc8 00:21:15.493 [2024-04-24 20:12:57.521781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.521810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.533951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f6458 00:21:15.493 [2024-04-24 20:12:57.536004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.536035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.548550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f5be8 00:21:15.493 [2024-04-24 20:12:57.550591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.550623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.563559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f5378 00:21:15.493 [2024-04-24 20:12:57.565622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.565654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.578220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f4b08 00:21:15.493 [2024-04-24 20:12:57.580303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.580334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.592956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f4298 00:21:15.493 [2024-04-24 20:12:57.595047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.595079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.608118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f3a28 00:21:15.493 [2024-04-24 20:12:57.610192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.610225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.623327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f31b8 00:21:15.493 [2024-04-24 20:12:57.625388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.625425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.638589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f2948 00:21:15.493 [2024-04-24 20:12:57.640617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.640656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.654120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f20d8 00:21:15.493 [2024-04-24 20:12:57.656150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.656191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.669065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f1868 00:21:15.493 [2024-04-24 20:12:57.671009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.671046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.683484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f0ff8 00:21:15.493 [2024-04-24 20:12:57.685433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.685464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.697832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f0788 00:21:15.493 [2024-04-24 20:12:57.699730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.699776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.711808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eff18 00:21:15.493 [2024-04-24 20:12:57.713607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.493 [2024-04-24 20:12:57.713638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:15.493 [2024-04-24 20:12:57.725572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ef6a8 00:21:15.494 [2024-04-24 20:12:57.727481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.494 [2024-04-24 20:12:57.727515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:15.494 [2024-04-24 20:12:57.740704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eee38 00:21:15.494 [2024-04-24 20:12:57.742597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.494 [2024-04-24 20:12:57.742633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:15.774 [2024-04-24 20:12:57.756075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ee5c8 00:21:15.774 [2024-04-24 20:12:57.757938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.774 [2024-04-24 20:12:57.757973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:15.774 [2024-04-24 20:12:57.771258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190edd58 00:21:15.774 [2024-04-24 20:12:57.773133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.774 [2024-04-24 20:12:57.773167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:15.774 [2024-04-24 20:12:57.785475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ed4e8 00:21:15.774 [2024-04-24 20:12:57.787160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.774 [2024-04-24 20:12:57.787194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:15.774 [2024-04-24 20:12:57.799294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ecc78 00:21:15.774 [2024-04-24 20:12:57.801016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.774 [2024-04-24 20:12:57.801046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:15.774 [2024-04-24 20:12:57.813038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ec408 00:21:15.775 [2024-04-24 20:12:57.814698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.814730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.827779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ebb98 00:21:15.775 [2024-04-24 20:12:57.829536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.842687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eb328 00:21:15.775 [2024-04-24 20:12:57.844437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.844475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.857189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eaab8 00:21:15.775 [2024-04-24 20:12:57.858834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.858870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.870699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ea248 00:21:15.775 [2024-04-24 20:12:57.872304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.872341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.883974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e99d8 00:21:15.775 [2024-04-24 20:12:57.885373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.885415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.896713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e9168 00:21:15.775 [2024-04-24 20:12:57.898087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.898120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.909093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e88f8 00:21:15.775 [2024-04-24 20:12:57.910588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.910619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.922866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e8088 00:21:15.775 [2024-04-24 20:12:57.924396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.924428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.936372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e7818 00:21:15.775 [2024-04-24 20:12:57.937873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.937906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.949422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e6fa8 00:21:15.775 [2024-04-24 20:12:57.950822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.950851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.962369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e6738 00:21:15.775 [2024-04-24 20:12:57.963772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.963805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.975339] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e5ec8 00:21:15.775 [2024-04-24 20:12:57.976763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.976794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:57.989897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e5658 00:21:15.775 [2024-04-24 20:12:57.991341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:57.991386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:58.004646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e4de8 00:21:15.775 [2024-04-24 20:12:58.006087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:58.006121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:15.775 [2024-04-24 20:12:58.019232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e4578 00:21:15.775 [2024-04-24 20:12:58.020724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:15.775 [2024-04-24 20:12:58.020764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.034005] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e3d08 00:21:16.034 [2024-04-24 20:12:58.035477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 20:12:58.035515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.048747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e3498 00:21:16.034 [2024-04-24 20:12:58.050103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 20:12:58.050137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.063064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e2c28 00:21:16.034 [2024-04-24 20:12:58.064497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 20:12:58.064530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.078144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e23b8 00:21:16.034 [2024-04-24 20:12:58.079583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 
20:12:58.079619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.092960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e1b48 00:21:16.034 [2024-04-24 20:12:58.094305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 20:12:58.094340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.106842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e12d8 00:21:16.034 [2024-04-24 20:12:58.108120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 20:12:58.108153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.121273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e0a68 00:21:16.034 [2024-04-24 20:12:58.122544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.034 [2024-04-24 20:12:58.122576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:16.034 [2024-04-24 20:12:58.135343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e01f8 00:21:16.034 [2024-04-24 20:12:58.136601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.136634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.148577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190df988 00:21:16.035 [2024-04-24 20:12:58.149796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.149828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.161580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190df118 00:21:16.035 [2024-04-24 20:12:58.162738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.162767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.174982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190de8a8 00:21:16.035 [2024-04-24 20:12:58.176167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:16.035 [2024-04-24 20:12:58.176200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.189085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190de038 00:21:16.035 [2024-04-24 20:12:58.190339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.190390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.209329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190de038 00:21:16.035 [2024-04-24 20:12:58.211682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.211719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.224291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190de8a8 00:21:16.035 [2024-04-24 20:12:58.226765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.226800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.239202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190df118 00:21:16.035 [2024-04-24 20:12:58.241611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.241644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.253796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190df988 00:21:16.035 [2024-04-24 20:12:58.256167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.256199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.268323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e01f8 00:21:16.035 [2024-04-24 20:12:58.270678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.270714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:16.035 [2024-04-24 20:12:58.282894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e0a68 00:21:16.035 [2024-04-24 20:12:58.285256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17539 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.035 [2024-04-24 20:12:58.285295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.297892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e12d8 00:21:16.294 [2024-04-24 20:12:58.300394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.300436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.313555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e1b48 00:21:16.294 [2024-04-24 20:12:58.315933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.315969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.328972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e23b8 00:21:16.294 [2024-04-24 20:12:58.331313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.331349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.344364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e2c28 00:21:16.294 [2024-04-24 20:12:58.346648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.346683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.359470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e3498 00:21:16.294 [2024-04-24 20:12:58.361748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.361778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.374318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e3d08 00:21:16.294 [2024-04-24 20:12:58.376635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.376671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:16.294 [2024-04-24 20:12:58.389799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e4578 00:21:16.294 [2024-04-24 20:12:58.392058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:3882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.294 [2024-04-24 20:12:58.392090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.405287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e4de8 00:21:16.295 [2024-04-24 20:12:58.407534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.407570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.420497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e5658 00:21:16.295 [2024-04-24 20:12:58.422756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.422791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.435957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e5ec8 00:21:16.295 [2024-04-24 20:12:58.438116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.438148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.451034] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e6738 00:21:16.295 [2024-04-24 20:12:58.453169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.453203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.466341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e6fa8 00:21:16.295 [2024-04-24 20:12:58.468510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.468546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.481717] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e7818 00:21:16.295 [2024-04-24 20:12:58.483829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.483865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.496655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e8088 00:21:16.295 [2024-04-24 20:12:58.498769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:24571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.498798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.511328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e88f8 00:21:16.295 [2024-04-24 20:12:58.513372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.513408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.525459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e9168 00:21:16.295 [2024-04-24 20:12:58.527439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.527473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:16.295 [2024-04-24 20:12:58.540260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190e99d8 00:21:16.295 [2024-04-24 20:12:58.542361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.295 [2024-04-24 20:12:58.542471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.555813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ea248 00:21:16.554 [2024-04-24 20:12:58.557869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.557948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.570075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eaab8 00:21:16.554 [2024-04-24 20:12:58.572127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.572209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.584464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eb328 00:21:16.554 [2024-04-24 20:12:58.586513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.586605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.599333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ebb98 00:21:16.554 [2024-04-24 20:12:58.601346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:5386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.601432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.613739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ec408 00:21:16.554 [2024-04-24 20:12:58.615709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.615798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.627833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ecc78 00:21:16.554 [2024-04-24 20:12:58.629674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.629751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.642111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ed4e8 00:21:16.554 [2024-04-24 20:12:58.644098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.644185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.656661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190edd58 00:21:16.554 [2024-04-24 20:12:58.658533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.658624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:16.554 [2024-04-24 20:12:58.671054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ee5c8 00:21:16.554 [2024-04-24 20:12:58.672926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.554 [2024-04-24 20:12:58.673008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.686041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eee38 00:21:16.555 [2024-04-24 20:12:58.687972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.688057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.700994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190ef6a8 00:21:16.555 [2024-04-24 20:12:58.702820] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.702903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.715680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190eff18 00:21:16.555 [2024-04-24 20:12:58.717545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.717627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.730329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f0788 00:21:16.555 [2024-04-24 20:12:58.732193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.732274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.744734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f0ff8 00:21:16.555 [2024-04-24 20:12:58.746421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.746497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.759283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f1868 00:21:16.555 [2024-04-24 20:12:58.761103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.761184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.781806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f20d8 00:21:16.555 [2024-04-24 20:12:58.783641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.783913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:16.555 [2024-04-24 20:12:58.801157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f2948 00:21:16.555 [2024-04-24 20:12:58.802707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.555 [2024-04-24 20:12:58.802856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.815815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f31b8 00:21:16.815 [2024-04-24 20:12:58.817486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.817615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.829353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f3a28 00:21:16.815 [2024-04-24 20:12:58.830814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.830857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.842553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f4298 00:21:16.815 [2024-04-24 20:12:58.844026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.844065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.855635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f4b08 00:21:16.815 [2024-04-24 20:12:58.857123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.857159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.868359] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f5378 00:21:16.815 [2024-04-24 20:12:58.869793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.869842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.881306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f5be8 00:21:16.815 [2024-04-24 20:12:58.882704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.882754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.894163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f6458 00:21:16.815 [2024-04-24 20:12:58.895615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.895664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.907194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f6cc8 00:21:16.815 [2024-04-24 20:12:58.908515] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.908559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.920004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f7538 00:21:16.815 [2024-04-24 20:12:58.921265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.921313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.932777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f7da8 00:21:16.815 [2024-04-24 20:12:58.934007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.934051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.945414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f8618 00:21:16.815 [2024-04-24 20:12:58.946647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.946690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.958081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f8e88 00:21:16.815 [2024-04-24 20:12:58.959458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.959508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.971211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f96f8 00:21:16.815 [2024-04-24 20:12:58.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.972629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.984586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f9f68 00:21:16.815 [2024-04-24 20:12:58.985792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.815 [2024-04-24 20:12:58.985838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:16.815 [2024-04-24 20:12:58.998045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fa7d8 00:21:16.815 [2024-04-24 
20:12:58.999382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.816 [2024-04-24 20:12:58.999447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:16.816 [2024-04-24 20:12:59.011537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fb048 00:21:16.816 [2024-04-24 20:12:59.012794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.816 [2024-04-24 20:12:59.012839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:16.816 [2024-04-24 20:12:59.024871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fb8b8 00:21:16.816 [2024-04-24 20:12:59.026065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.816 [2024-04-24 20:12:59.026105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:16.816 [2024-04-24 20:12:59.037725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fc128 00:21:16.816 [2024-04-24 20:12:59.038884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.816 [2024-04-24 20:12:59.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:16.816 [2024-04-24 20:12:59.050828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fc998 00:21:16.816 [2024-04-24 20:12:59.052068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.816 [2024-04-24 20:12:59.052105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:16.816 [2024-04-24 20:12:59.064568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fd208 00:21:16.816 [2024-04-24 20:12:59.065867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:16.816 [2024-04-24 20:12:59.065925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.078341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fda78 00:21:17.076 [2024-04-24 20:12:59.079615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.079669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.091937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fe2e8 
00:21:17.076 [2024-04-24 20:12:59.093078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.093132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.105458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190feb58 00:21:17.076 [2024-04-24 20:12:59.106577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.106632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.124421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fef90 00:21:17.076 [2024-04-24 20:12:59.126668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.126737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.137521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190feb58 00:21:17.076 [2024-04-24 20:12:59.139704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.139776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.150507] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fe2e8 00:21:17.076 [2024-04-24 20:12:59.152638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.152695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.163415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fda78 00:21:17.076 [2024-04-24 20:12:59.165608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.165657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.176270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fd208 00:21:17.076 [2024-04-24 20:12:59.178319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.178356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.188773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with 
pdu=0x2000190fc998 00:21:17.076 [2024-04-24 20:12:59.190776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.076 [2024-04-24 20:12:59.190810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:17.076 [2024-04-24 20:12:59.201470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fc128 00:21:17.076 [2024-04-24 20:12:59.203411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.077 [2024-04-24 20:12:59.203441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:17.077 [2024-04-24 20:12:59.213845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fb8b8 00:21:17.077 [2024-04-24 20:12:59.215761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.077 [2024-04-24 20:12:59.215793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:17.077 [2024-04-24 20:12:59.226462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fb048 00:21:17.077 [2024-04-24 20:12:59.228379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.077 [2024-04-24 20:12:59.228419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:17.077 [2024-04-24 20:12:59.239045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190fa7d8 00:21:17.077 [2024-04-24 20:12:59.240982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.077 [2024-04-24 20:12:59.241013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:17.077 [2024-04-24 20:12:59.251949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f9f68 00:21:17.077 [2024-04-24 20:12:59.254007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.077 [2024-04-24 20:12:59.254046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:17.077 [2024-04-24 20:12:59.265543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefe750) with pdu=0x2000190f96f8 00:21:17.077 [2024-04-24 20:12:59.267751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.077 [2024-04-24 20:12:59.267788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:17.077 00:21:17.077 Latency(us) 00:21:17.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:21:17.077 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:17.077 nvme0n1 : 2.01 17777.05 69.44 0.00 0.00 7195.00 6095.71 31136.75 00:21:17.077 =================================================================================================================== 00:21:17.077 Total : 17777.05 69.44 0.00 0.00 7195.00 6095.71 31136.75 00:21:17.077 0 00:21:17.077 20:12:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:17.077 20:12:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:17.077 | .driver_specific 00:21:17.077 | .nvme_error 00:21:17.077 | .status_code 00:21:17.077 | .command_transient_transport_error' 00:21:17.077 20:12:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:17.077 20:12:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:17.336 20:12:59 -- host/digest.sh@71 -- # (( 139 > 0 )) 00:21:17.336 20:12:59 -- host/digest.sh@73 -- # killprocess 76639 00:21:17.336 20:12:59 -- common/autotest_common.sh@936 -- # '[' -z 76639 ']' 00:21:17.336 20:12:59 -- common/autotest_common.sh@940 -- # kill -0 76639 00:21:17.336 20:12:59 -- common/autotest_common.sh@941 -- # uname 00:21:17.336 20:12:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:17.336 20:12:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76639 00:21:17.336 20:12:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:17.336 20:12:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:17.336 20:12:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76639' 00:21:17.336 killing process with pid 76639 00:21:17.336 20:12:59 -- common/autotest_common.sh@955 -- # kill 76639 00:21:17.336 Received shutdown signal, test time was about 2.000000 seconds 00:21:17.336 00:21:17.336 Latency(us) 00:21:17.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.336 =================================================================================================================== 00:21:17.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.336 20:12:59 -- common/autotest_common.sh@960 -- # wait 76639 00:21:17.906 20:12:59 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:17.906 20:12:59 -- host/digest.sh@54 -- # local rw bs qd 00:21:17.906 20:12:59 -- host/digest.sh@56 -- # rw=randwrite 00:21:17.906 20:12:59 -- host/digest.sh@56 -- # bs=131072 00:21:17.906 20:12:59 -- host/digest.sh@56 -- # qd=16 00:21:17.906 20:12:59 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:17.906 20:12:59 -- host/digest.sh@58 -- # bperfpid=76699 00:21:17.906 20:12:59 -- host/digest.sh@60 -- # waitforlisten 76699 /var/tmp/bperf.sock 00:21:17.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.906 20:12:59 -- common/autotest_common.sh@817 -- # '[' -z 76699 ']' 00:21:17.906 20:12:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.906 20:12:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.906 20:12:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:21:17.906 20:12:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.906 20:12:59 -- common/autotest_common.sh@10 -- # set +x 00:21:17.906 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:17.906 Zero copy mechanism will not be used. 00:21:17.906 [2024-04-24 20:12:59.947978] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:17.906 [2024-04-24 20:12:59.948081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76699 ] 00:21:17.906 [2024-04-24 20:13:00.087069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.166 [2024-04-24 20:13:00.241121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.732 20:13:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.732 20:13:00 -- common/autotest_common.sh@850 -- # return 0 00:21:18.732 20:13:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:18.732 20:13:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:18.991 20:13:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:18.991 20:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.991 20:13:01 -- common/autotest_common.sh@10 -- # set +x 00:21:18.991 20:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.991 20:13:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:18.991 20:13:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.250 nvme0n1 00:21:19.250 20:13:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:19.250 20:13:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.250 20:13:01 -- common/autotest_common.sh@10 -- # set +x 00:21:19.250 20:13:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.250 20:13:01 -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:19.250 20:13:01 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:19.250 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:19.250 Zero copy mechanism will not be used. 00:21:19.250 Running I/O for 2 seconds... 
00:21:19.250 [2024-04-24 20:13:01.485141] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.250 [2024-04-24 20:13:01.485758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-04-24 20:13:01.485820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.250 [2024-04-24 20:13:01.490965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.250 [2024-04-24 20:13:01.491534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-04-24 20:13:01.491576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.250 [2024-04-24 20:13:01.496713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.250 [2024-04-24 20:13:01.497268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-04-24 20:13:01.497306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.250 [2024-04-24 20:13:01.502437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.250 [2024-04-24 20:13:01.503000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-04-24 20:13:01.503038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.508197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.508738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.508786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.513823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.514323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.514360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.519360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.519899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.519937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.524784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.525307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.525335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.530206] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.530762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.530801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.535708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.536219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.536267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.541061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.541582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.541611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.546437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.546967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.547024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.551900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.552420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.552468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.557340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.557865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.557901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.562719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.563253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.563304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.568245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.568804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.568843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.573594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.573650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.573671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.514 [2024-04-24 20:13:01.579074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.514 [2024-04-24 20:13:01.579130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.514 [2024-04-24 20:13:01.579152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.584502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.584587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.584607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.589853] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.589914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.589936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.595202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.595264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.595287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.600654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.600708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.600731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.605961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.606026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.606049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.611541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.611603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.611628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.616870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.616933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.616956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.622619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.622684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.622725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.628381] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.628457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.628496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.634073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.634129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 
[2024-04-24 20:13:01.634167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.639809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.639870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.639896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.645439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.645497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.645521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.650957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.651020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.651044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.656734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.656794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.656818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.662395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.662455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.662482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.667992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.668058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.668091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.673484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.673544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.673578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.678953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.679017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.679048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.684314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.684395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.684427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.689816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.689903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.689927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.695315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.695388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.695431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.700751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.700837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.700864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.706110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.706265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.706292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.711500] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.711585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.711611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.717195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.717341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.717387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.722526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.722626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.722650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.728279] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.515 [2024-04-24 20:13:01.728594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.515 [2024-04-24 20:13:01.728630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.515 [2024-04-24 20:13:01.733905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.516 [2024-04-24 20:13:01.734118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.516 [2024-04-24 20:13:01.734142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.516 [2024-04-24 20:13:01.739458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.516 [2024-04-24 20:13:01.739538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.516 [2024-04-24 20:13:01.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.516 [2024-04-24 20:13:01.744934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.516 [2024-04-24 20:13:01.745038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.516 [2024-04-24 20:13:01.745063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.516 [2024-04-24 20:13:01.750175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.516 [2024-04-24 20:13:01.750296] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.516 [2024-04-24 20:13:01.750323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.516 [2024-04-24 20:13:01.755670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.516 [2024-04-24 20:13:01.755770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.516 [2024-04-24 20:13:01.755805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.516 [2024-04-24 20:13:01.761223] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.516 [2024-04-24 20:13:01.761322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.516 [2024-04-24 20:13:01.761354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.780 [2024-04-24 20:13:01.767187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.767310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.767342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.772965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.773027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.773058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.778702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.778868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.778906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.784601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.784765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.784794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.790414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.790489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.790540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.796063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.796166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.796193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.801721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.801787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.801815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.807203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.807306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.807338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.812973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.813068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.813095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.818641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.818766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.818794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.824513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.824580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.824610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.830520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 
20:13:01.830651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.830687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.836562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.836702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.836746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.842721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.842804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.842840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.848807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.848906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.848933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.854855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.854991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.855018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.861181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.861270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.861297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.867506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.867589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.867616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.873220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 
00:21:19.781 [2024-04-24 20:13:01.873325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.873349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.879010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.879108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.879136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.884872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.884988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.885013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.890390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.890515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.890564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.895906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.895971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.895993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.901373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.901506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.901530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.906963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.907092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.907118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.912798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with 
pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.913000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.913028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.918397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.781 [2024-04-24 20:13:01.918591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.781 [2024-04-24 20:13:01.918618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.781 [2024-04-24 20:13:01.924186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.924406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.924435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.929960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.930137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.930162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.935540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.935619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.935647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.941001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.941140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.941175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.946538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.946628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.946652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.952236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.952407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.952431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.957762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.957825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.957847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.963179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.963292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.963321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.968949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.969103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.969131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.974598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.974692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.974716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.980190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.980317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.980341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.986139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.986283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.986306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.991992] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.992196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.992218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:01.997463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:01.997692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:01.997714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:02.002917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:02.003121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:02.003143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:02.008471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:02.008726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:02.008746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:02.013971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:02.014191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:02.014212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:02.019442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:02.019704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:02.019742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:02.025179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:02.025481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:02.025502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:19.782 [2024-04-24 20:13:02.030786] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:19.782 [2024-04-24 20:13:02.031022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.782 [2024-04-24 20:13:02.031043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.041 [2024-04-24 20:13:02.036439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.036717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.036750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.042012] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.042255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.042280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.047338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.047550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.047581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.052906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.053133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.053163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.058347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.058681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.058715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.064125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.064397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.064425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.042 
[2024-04-24 20:13:02.070184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.070478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.070516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.075779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.075988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.076014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.080879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.081286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.081318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.086202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.086285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.086307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.091983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.092046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.092069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.097679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.097737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.097760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.103380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.103451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.103474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.108954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.109007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.109029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.114725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.114789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.114813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.120551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.120613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.120637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.126224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.126281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.126302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.132014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.132074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.132095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.137739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.137795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.137815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.143845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.042 [2024-04-24 20:13:02.143911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.042 [2024-04-24 20:13:02.143933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.042 [2024-04-24 20:13:02.149662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.149724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.149747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.155385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.155468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.155490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.161174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.161245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.161265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.166916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.166993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.167021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.172643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.172710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.172741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.178320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.178395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.178429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.184115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.184180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.184215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.189938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.190041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.190072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.195734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.195800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.195831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.201343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.201439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.201469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.207095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.207159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.207186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.212871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.212939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.212965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.218855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.218916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.218940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.224902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.224975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.224997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.230939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.231004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.231029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.236832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.236900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.236921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.242788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.242852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.242875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.248301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.248405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.248430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.253947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.254041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.254061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.259446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.259515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.259535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.264938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.265053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.265075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.270641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.270702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.270724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.276344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.276482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.276504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.282259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.282347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.282371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.043 [2024-04-24 20:13:02.288027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.043 [2024-04-24 20:13:02.288108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.043 [2024-04-24 20:13:02.288130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.294154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.294237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.294259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.300175] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.300264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.300286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.305961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.306043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 
20:13:02.306065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.311702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.311781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.311803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.317421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.317568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.317590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.323503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.323620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.323641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.329666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.329878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.329900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.335570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.335651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.335673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.341478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.341573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.341592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.347180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.347262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:20.304 [2024-04-24 20:13:02.347283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.353000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.353065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.353088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.358661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.358787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.358810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.364348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.364434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.364456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.369993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.370095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.370117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.375719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.375829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.375859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.381398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.381514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.381536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.386813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.304 [2024-04-24 20:13:02.386940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.304 [2024-04-24 20:13:02.386963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.304 [2024-04-24 20:13:02.392411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.392576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.392600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.397813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.398082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.398106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.403501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.403737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.403760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.408416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.408850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.408880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.413768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.413819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.413840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.419187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.419238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.419260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.424974] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.425030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.425052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.430772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.430840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.430863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.436307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.436362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.436398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.441806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.441889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.441910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.447188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.447241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.447262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.452741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.452810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.452831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.458276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.458329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.458349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.463964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.464022] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.464045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.469435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.469513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.469535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.475007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.475076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.475099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.480555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.480614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.480635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.485869] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.485920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.485941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.491186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.491236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.491259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.496622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.496679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.496700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.501951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.502004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.502024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.507547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.507607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.507628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.513307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.513387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.513410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.519138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.519193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.519215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.524945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.525026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.530563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.530616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.530639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.536146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 20:13:02.536201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.305 [2024-04-24 20:13:02.536223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.305 [2024-04-24 20:13:02.541747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.305 [2024-04-24 
20:13:02.541801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.306 [2024-04-24 20:13:02.541823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.306 [2024-04-24 20:13:02.547258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.306 [2024-04-24 20:13:02.547331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.306 [2024-04-24 20:13:02.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.306 [2024-04-24 20:13:02.552673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.306 [2024-04-24 20:13:02.552746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.306 [2024-04-24 20:13:02.552768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.558457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.558581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.558604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.564169] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.564261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.564281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.569714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.569872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.569894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.575276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.575381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.575424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.580830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 
00:21:20.565 [2024-04-24 20:13:02.580889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.580913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.586641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.586706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.586727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.592415] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.592506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.592527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.598063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.598139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.598161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.603727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.603813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.603834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.609324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.609463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.609487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.614892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.614991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.615012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.620411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) 
with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.620504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.620526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.626010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.626087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.626109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.631627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.631716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.631739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.637219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.637330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.637353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.642945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.643063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.643087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.648543] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.648741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.648762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.654057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.654126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.654149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.659738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.659806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.659828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.665515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.665596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.665619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.671184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.671266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.671287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.676785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.676856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.676879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.682233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.682534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.682564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.687766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.687908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.687935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.693189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.693328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.693351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.698781] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.565 [2024-04-24 20:13:02.698919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.565 [2024-04-24 20:13:02.698948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.565 [2024-04-24 20:13:02.704503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.704579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.704608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.710130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.710210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.710235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.715736] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.715820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.715844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.721254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.721334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.721358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.726738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.726812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.726836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.732129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.732210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.732234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.737580] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.737668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.737690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.743369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.743446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.743472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.749049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.749253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.749273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.754725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.754819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.754843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.760245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.760327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.760351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.765850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.766032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.766059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.771301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.771461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.771484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.566 
[2024-04-24 20:13:02.776852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.776939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.776960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.782335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.782456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.782483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.787932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.788031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.793609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.793691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.793711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.799236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.799317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.799339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.804884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.804972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.804994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.566 [2024-04-24 20:13:02.810429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.810592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:20.566 [2024-04-24 20:13:02.816205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.566 [2024-04-24 20:13:02.816318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.566 [2024-04-24 20:13:02.816338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.821889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.822030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.822055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.827685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.827826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.827853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.833362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.833457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.833482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.838928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.839023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.839044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.844622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.844799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.844821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.850150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.850341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.850363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.856004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.856088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.856109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.861700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.861869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.861891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.867469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.867658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.867681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.873249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.873326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.873348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.879162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.879370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.879408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.885099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.885390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.885463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.890893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.891077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.891099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.896529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.896665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.896689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.902054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.902265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.902295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.907785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.907995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.908015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.913470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.913711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.913735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.919427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.919683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.919705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.924621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.925061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.925088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.930149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.930226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.930248] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.936024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.936079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.936103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.941728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.941783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.941804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.947460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.947518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.947544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.826 [2024-04-24 20:13:02.953101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.826 [2024-04-24 20:13:02.953167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.826 [2024-04-24 20:13:02.953188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.958762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.958820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.958842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.964226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.964281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.964304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.969708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.969761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.969783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.975268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.975334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.975360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.980729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.980780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.980802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.986106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.986165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.986187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.991863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.991919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.991941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:02.997327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:02.997402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:02.997442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.002838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.002928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.002951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.008492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.008635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 
20:13:03.008660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.014080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.014168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.014192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.019755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.019864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.019901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.025338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.025420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.025444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.031094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.031174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.031199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.036983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.037060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.037085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.042737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.042789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.042816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.048473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.048664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:20.827 [2024-04-24 20:13:03.048689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.054016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.054092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.054118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.059668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.059836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.059860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.065215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.065291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.065314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.070835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.070914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.070940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:20.827 [2024-04-24 20:13:03.076641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:20.827 [2024-04-24 20:13:03.076883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.827 [2024-04-24 20:13:03.076918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.082058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.082194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.082225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.087595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.087692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.087718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.093243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.093333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.093355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.098760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.098878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.098903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.104073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.104265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.104291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.109437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.109660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.109687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.114097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.114505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.114568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.119476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.120031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.120065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.124882] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.125416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.125448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.130433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.130986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.131019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.135562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.135625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.135662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.141075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.141135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.141155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.146866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.146916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.146937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.152327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.152425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.152445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.157881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.157931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.157951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.163716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.163788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.163812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.169287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.169351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.169371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.086 [2024-04-24 20:13:03.174824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.086 [2024-04-24 20:13:03.174874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.086 [2024-04-24 20:13:03.174896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.180402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.180456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.180506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.186109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.186177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.186199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.191900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.191954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.191975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.197466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.197514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.197536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.203264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.203321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.203343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.209009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.209075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.209096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.214800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.214865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.214887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.220444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.220509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.220531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.226051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.226105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.226125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.231493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.231549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.231571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.237087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.237147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.237171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.242810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 
20:13:03.242892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.242918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.248443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.248534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.248560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.253894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.253982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.254010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.259428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.259535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.259566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.264930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.265156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.265191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.270212] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.270318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.270341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.275658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.275742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.275766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.281112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 
00:21:21.087 [2024-04-24 20:13:03.281291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.281312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.286698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.286772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.286796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.292322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.292500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.292526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.297834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.298000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.298023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.303508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.303596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.303621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.309004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.309084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.309106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.314452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.314635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.314660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.319883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) 
with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.319945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.087 [2024-04-24 20:13:03.319966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.087 [2024-04-24 20:13:03.325411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.087 [2024-04-24 20:13:03.325500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.088 [2024-04-24 20:13:03.325524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.088 [2024-04-24 20:13:03.330851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.088 [2024-04-24 20:13:03.330922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.088 [2024-04-24 20:13:03.330945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.088 [2024-04-24 20:13:03.336598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.088 [2024-04-24 20:13:03.336659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.088 [2024-04-24 20:13:03.336680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.346 [2024-04-24 20:13:03.342185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.346 [2024-04-24 20:13:03.342356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.342390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.347665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.347751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.347772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.353136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.353315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.358863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.358960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.358985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.364440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.364520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.364540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.370095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.370236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.370256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.375760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.375850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.375870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.381520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.381687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.381708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.387436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.387625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.387645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.393391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.393467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.393489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.399133] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.399282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.399303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.404862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.404975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.404997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.410553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.410690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.410710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.416309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.416417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.416440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.421928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.421997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.422017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.427601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.427718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.427739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.433237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.433333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.433354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.347 
[2024-04-24 20:13:03.438896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.438958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.438980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.444676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.444773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.444796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.450145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.450228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.450251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.455712] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.455841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.461163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.461290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.461324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.347 [2024-04-24 20:13:03.466488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xefea90) with pdu=0x2000190fef90 00:21:21.347 [2024-04-24 20:13:03.466626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.347 [2024-04-24 20:13:03.466661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:21.347 00:21:21.347 Latency(us) 00:21:21.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.347 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:21.347 nvme0n1 : 2.00 5475.48 684.43 0.00 0.00 2916.78 1874.50 11847.99 00:21:21.347 =================================================================================================================== 00:21:21.347 Total : 5475.48 684.43 0.00 0.00 2916.78 1874.50 11847.99 
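A quick arithmetic cross-check of the summary table above (plain arithmetic on the reported values, not part of the test scripts): with 128 KiB (131072-byte) writes the MiB/s column is simply IOPS divided by 8, and with queue depth 16 the reported average latency implies roughly the IOPS the run measured.

awk 'BEGIN {
    iops = 5475.48; io_kib = 128; qd = 16; avg_lat_us = 2916.78   # values copied from the table above
    printf "throughput        : %.2f MiB/s\n", iops * io_kib / 1024   # ~684.4, matches the MiB/s column
    printf "IOPS from latency : %.0f\n", qd * 1e6 / avg_lat_us        # ~5485, close to the measured 5475.48
}'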
00:21:21.347 0 00:21:21.347 20:13:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:21.347 20:13:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:21.347 20:13:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:21.347 20:13:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:21.347 | .driver_specific 00:21:21.347 | .nvme_error 00:21:21.347 | .status_code 00:21:21.347 | .command_transient_transport_error' 00:21:21.606 20:13:03 -- host/digest.sh@71 -- # (( 353 > 0 )) 00:21:21.606 20:13:03 -- host/digest.sh@73 -- # killprocess 76699 00:21:21.606 20:13:03 -- common/autotest_common.sh@936 -- # '[' -z 76699 ']' 00:21:21.606 20:13:03 -- common/autotest_common.sh@940 -- # kill -0 76699 00:21:21.606 20:13:03 -- common/autotest_common.sh@941 -- # uname 00:21:21.606 20:13:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.606 20:13:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76699 00:21:21.606 killing process with pid 76699 00:21:21.606 20:13:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:21.606 20:13:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:21.606 20:13:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76699' 00:21:21.606 Received shutdown signal, test time was about 2.000000 seconds 00:21:21.606 00:21:21.606 Latency(us) 00:21:21.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.606 =================================================================================================================== 00:21:21.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.606 20:13:03 -- common/autotest_common.sh@955 -- # kill 76699 00:21:21.606 20:13:03 -- common/autotest_common.sh@960 -- # wait 76699 00:21:21.865 20:13:04 -- host/digest.sh@116 -- # killprocess 76492 00:21:21.865 20:13:04 -- common/autotest_common.sh@936 -- # '[' -z 76492 ']' 00:21:21.865 20:13:04 -- common/autotest_common.sh@940 -- # kill -0 76492 00:21:21.865 20:13:04 -- common/autotest_common.sh@941 -- # uname 00:21:21.865 20:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:21.865 20:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76492 00:21:22.124 20:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.124 20:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.124 killing process with pid 76492 00:21:22.124 20:13:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76492' 00:21:22.124 20:13:04 -- common/autotest_common.sh@955 -- # kill 76492 00:21:22.124 [2024-04-24 20:13:04.134121] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:22.124 20:13:04 -- common/autotest_common.sh@960 -- # wait 76492 00:21:22.124 00:21:22.124 real 0m17.783s 00:21:22.124 user 0m33.586s 00:21:22.124 sys 0m4.812s 00:21:22.124 20:13:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:22.124 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:21:22.124 ************************************ 00:21:22.124 END TEST nvmf_digest_error 00:21:22.124 ************************************ 00:21:22.384 20:13:04 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:22.384 20:13:04 -- host/digest.sh@150 -- # nvmftestfini 00:21:22.384 20:13:04 
-- nvmf/common.sh@477 -- # nvmfcleanup 00:21:22.384 20:13:04 -- nvmf/common.sh@117 -- # sync 00:21:22.384 20:13:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:22.384 20:13:04 -- nvmf/common.sh@120 -- # set +e 00:21:22.384 20:13:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:22.384 20:13:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:22.384 rmmod nvme_tcp 00:21:22.384 rmmod nvme_fabrics 00:21:22.384 rmmod nvme_keyring 00:21:22.384 20:13:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:22.384 20:13:04 -- nvmf/common.sh@124 -- # set -e 00:21:22.384 20:13:04 -- nvmf/common.sh@125 -- # return 0 00:21:22.384 20:13:04 -- nvmf/common.sh@478 -- # '[' -n 76492 ']' 00:21:22.384 20:13:04 -- nvmf/common.sh@479 -- # killprocess 76492 00:21:22.384 20:13:04 -- common/autotest_common.sh@936 -- # '[' -z 76492 ']' 00:21:22.384 20:13:04 -- common/autotest_common.sh@940 -- # kill -0 76492 00:21:22.384 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76492) - No such process 00:21:22.384 Process with pid 76492 is not found 00:21:22.384 20:13:04 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76492 is not found' 00:21:22.384 20:13:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:22.384 20:13:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:22.384 20:13:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:22.384 20:13:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:22.384 20:13:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:22.384 20:13:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.384 20:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.384 20:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.384 20:13:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:22.384 00:21:22.384 real 0m36.413s 00:21:22.384 user 1m7.526s 00:21:22.384 sys 0m9.722s 00:21:22.384 20:13:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:22.384 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:21:22.384 ************************************ 00:21:22.384 END TEST nvmf_digest 00:21:22.384 ************************************ 00:21:22.384 20:13:04 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:21:22.384 20:13:04 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:21:22.384 20:13:04 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:22.384 20:13:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:22.384 20:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.384 20:13:04 -- common/autotest_common.sh@10 -- # set +x 00:21:22.644 ************************************ 00:21:22.644 START TEST nvmf_multipath 00:21:22.644 ************************************ 00:21:22.644 20:13:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:22.644 * Looking for test storage... 
00:21:22.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:22.644 20:13:04 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:22.644 20:13:04 -- nvmf/common.sh@7 -- # uname -s 00:21:22.644 20:13:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.644 20:13:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.644 20:13:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.644 20:13:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.644 20:13:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.644 20:13:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.644 20:13:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.644 20:13:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.644 20:13:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.644 20:13:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.644 20:13:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:21:22.644 20:13:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:21:22.644 20:13:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.644 20:13:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.644 20:13:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:22.644 20:13:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.644 20:13:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.644 20:13:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.644 20:13:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.644 20:13:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.644 20:13:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.644 20:13:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.644 20:13:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.644 20:13:04 -- paths/export.sh@5 -- # export PATH 00:21:22.644 20:13:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.644 20:13:04 -- nvmf/common.sh@47 -- # : 0 00:21:22.644 20:13:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.644 20:13:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.644 20:13:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.644 20:13:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.644 20:13:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.644 20:13:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.644 20:13:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.644 20:13:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.644 20:13:04 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.644 20:13:04 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.644 20:13:04 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:22.644 20:13:04 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:22.644 20:13:04 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.644 20:13:04 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:22.644 20:13:04 -- host/multipath.sh@30 -- # nvmftestinit 00:21:22.644 20:13:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:22.644 20:13:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.644 20:13:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:22.644 20:13:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:22.644 20:13:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:22.644 20:13:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.644 20:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.644 20:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.644 20:13:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:22.644 20:13:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:22.644 20:13:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:22.644 20:13:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:22.644 20:13:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:22.644 20:13:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:22.644 20:13:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.644 20:13:04 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.644 20:13:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:22.644 20:13:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:22.644 20:13:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:22.644 20:13:04 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:22.644 20:13:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:22.644 20:13:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.644 20:13:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:22.644 20:13:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:22.644 20:13:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:22.644 20:13:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:22.644 20:13:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:22.644 20:13:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:22.644 Cannot find device "nvmf_tgt_br" 00:21:22.644 20:13:04 -- nvmf/common.sh@155 -- # true 00:21:22.644 20:13:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:22.904 Cannot find device "nvmf_tgt_br2" 00:21:22.904 20:13:04 -- nvmf/common.sh@156 -- # true 00:21:22.904 20:13:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:22.904 20:13:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:22.904 Cannot find device "nvmf_tgt_br" 00:21:22.904 20:13:04 -- nvmf/common.sh@158 -- # true 00:21:22.904 20:13:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:22.904 Cannot find device "nvmf_tgt_br2" 00:21:22.904 20:13:04 -- nvmf/common.sh@159 -- # true 00:21:22.904 20:13:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:22.904 20:13:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:22.904 20:13:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:22.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.904 20:13:05 -- nvmf/common.sh@162 -- # true 00:21:22.904 20:13:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:22.904 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:22.904 20:13:05 -- nvmf/common.sh@163 -- # true 00:21:22.904 20:13:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:22.904 20:13:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:22.904 20:13:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:22.904 20:13:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:22.904 20:13:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:22.904 20:13:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:22.904 20:13:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:22.904 20:13:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:22.904 20:13:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:22.904 20:13:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:22.904 20:13:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:22.904 20:13:05 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:21:22.904 20:13:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:22.904 20:13:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:22.904 20:13:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:22.904 20:13:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:22.904 20:13:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:22.904 20:13:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:23.164 20:13:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:23.164 20:13:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:23.164 20:13:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:23.164 20:13:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:23.164 20:13:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:23.164 20:13:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:23.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:21:23.164 00:21:23.164 --- 10.0.0.2 ping statistics --- 00:21:23.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.164 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:21:23.164 20:13:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:23.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:23.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:21:23.164 00:21:23.164 --- 10.0.0.3 ping statistics --- 00:21:23.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.164 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:21:23.164 20:13:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:23.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:21:23.164 00:21:23.164 --- 10.0.0.1 ping statistics --- 00:21:23.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.164 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:23.164 20:13:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.164 20:13:05 -- nvmf/common.sh@422 -- # return 0 00:21:23.164 20:13:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:23.164 20:13:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.164 20:13:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:23.164 20:13:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:23.164 20:13:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.164 20:13:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:23.164 20:13:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:23.164 20:13:05 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:23.164 20:13:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:23.164 20:13:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:23.164 20:13:05 -- common/autotest_common.sh@10 -- # set +x 00:21:23.164 20:13:05 -- nvmf/common.sh@470 -- # nvmfpid=76977 00:21:23.164 20:13:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:23.164 20:13:05 -- nvmf/common.sh@471 -- # waitforlisten 76977 00:21:23.164 20:13:05 -- common/autotest_common.sh@817 -- # '[' -z 76977 ']' 00:21:23.164 20:13:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.164 20:13:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:23.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.164 20:13:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.164 20:13:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:23.164 20:13:05 -- common/autotest_common.sh@10 -- # set +x 00:21:23.164 [2024-04-24 20:13:05.321229] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:21:23.164 [2024-04-24 20:13:05.321294] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.424 [2024-04-24 20:13:05.458296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:23.424 [2024-04-24 20:13:05.628507] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.424 [2024-04-24 20:13:05.628568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.424 [2024-04-24 20:13:05.628575] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.424 [2024-04-24 20:13:05.628580] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.424 [2024-04-24 20:13:05.628585] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
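At this point the veth topology built by nvmf_veth_init above is in place: the initiator side owns 10.0.0.1 on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 sit on nvmf_tgt_if/nvmf_tgt_if2 inside nvmf_tgt_ns_spdk, and the nvmf_br bridge joins the two sides (hence the successful pings). nvmf_tgt itself was launched with core mask 0x3, so two reactors are expected; a minimal decode of that mask (illustrative only, not part of the test flow) predicts the "Reactor started" notices that follow.

# decode the -m core mask handed to nvmf_tgt; 0x3 selects cores 0 and 1
mask=0x3
for core in $(seq 0 7); do
    if (( (mask >> core) & 1 )); then
        echo "reactor expected on core $core"
    fi
done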
00:21:23.424 [2024-04-24 20:13:05.628739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.424 [2024-04-24 20:13:05.628744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.994 20:13:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.994 20:13:06 -- common/autotest_common.sh@850 -- # return 0 00:21:23.994 20:13:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:23.994 20:13:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.994 20:13:06 -- common/autotest_common.sh@10 -- # set +x 00:21:23.994 20:13:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.994 20:13:06 -- host/multipath.sh@33 -- # nvmfapp_pid=76977 00:21:23.994 20:13:06 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:24.254 [2024-04-24 20:13:06.454231] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.254 20:13:06 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:24.513 Malloc0 00:21:24.513 20:13:06 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:24.773 20:13:06 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:25.032 20:13:07 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.291 [2024-04-24 20:13:07.314824] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:25.291 [2024-04-24 20:13:07.315077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.291 20:13:07 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:25.291 [2024-04-24 20:13:07.526781] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:25.551 20:13:07 -- host/multipath.sh@44 -- # bdevperf_pid=77027 00:21:25.551 20:13:07 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:25.551 20:13:07 -- host/multipath.sh@47 -- # waitforlisten 77027 /var/tmp/bdevperf.sock 00:21:25.551 20:13:07 -- common/autotest_common.sh@817 -- # '[' -z 77027 ']' 00:21:25.551 20:13:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.551 20:13:07 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:25.551 20:13:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.551 20:13:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
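The bdevperf process launched above runs with -z, so it idles on /var/tmp/bdevperf.sock until it is configured over that private RPC socket; the controller attaches and the perform_tests trigger that appear later in this log all go through it. A condensed sketch of the pattern (flags as used in this run, paths shortened to be relative to the spdk checkout; in the real run autotest drives these steps itself):

# start bdevperf idle (-z) with its own RPC socket; it issues no I/O yet
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

# same global nvme bdev options as the run above, then attach the same subsystem once per
# portal; the second attach passes -x multipath so both paths aggregate under one Nvme0n1 bdev
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# kick off the timed I/O run over whichever path ANA currently marks usable
./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests

Keeping bdevperf alive while the harness flips ANA states on the target is what lets the confirm_io_on_port checks below attribute live I/O to a specific portal.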
00:21:25.551 20:13:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.551 20:13:07 -- common/autotest_common.sh@10 -- # set +x 00:21:26.486 20:13:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:26.486 20:13:08 -- common/autotest_common.sh@850 -- # return 0 00:21:26.486 20:13:08 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:26.486 20:13:08 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:26.746 Nvme0n1 00:21:26.746 20:13:08 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:27.006 Nvme0n1 00:21:27.006 20:13:09 -- host/multipath.sh@78 -- # sleep 1 00:21:27.006 20:13:09 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:28.382 20:13:10 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:28.382 20:13:10 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:28.382 20:13:10 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:28.642 20:13:10 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:28.642 20:13:10 -- host/multipath.sh@65 -- # dtrace_pid=77068 00:21:28.642 20:13:10 -- host/multipath.sh@66 -- # sleep 6 00:21:28.642 20:13:10 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:35.272 20:13:16 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:35.272 20:13:16 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:35.272 20:13:16 -- host/multipath.sh@67 -- # active_port=4421 00:21:35.272 20:13:16 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.272 Attaching 4 probes... 
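On the host side, bdevperf is attached to both listeners of the same subsystem; the second bdev_nvme_attach_controller call passes -x multipath so that port 4421 is registered as an additional path of Nvme0 rather than a second controller. Commands as issued against the bdevperf RPC socket in this log (rpc.py again abbreviates the full scripts/rpc.py path):

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10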
00:21:35.272 @path[10.0.0.2, 4421]: 19114 00:21:35.272 @path[10.0.0.2, 4421]: 18711 00:21:35.272 @path[10.0.0.2, 4421]: 17526 00:21:35.272 @path[10.0.0.2, 4421]: 18231 00:21:35.272 @path[10.0.0.2, 4421]: 18511 00:21:35.272 20:13:16 -- host/multipath.sh@69 -- # sed -n 1p 00:21:35.272 20:13:16 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:35.272 20:13:16 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:35.272 20:13:16 -- host/multipath.sh@69 -- # port=4421 00:21:35.272 20:13:16 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.273 20:13:16 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:35.273 20:13:16 -- host/multipath.sh@72 -- # kill 77068 00:21:35.273 20:13:16 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:35.273 20:13:16 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:35.273 20:13:16 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:35.273 20:13:17 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:35.273 20:13:17 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:35.273 20:13:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:35.273 20:13:17 -- host/multipath.sh@65 -- # dtrace_pid=77186 00:21:35.273 20:13:17 -- host/multipath.sh@66 -- # sleep 6 00:21:41.862 20:13:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:41.862 20:13:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:41.862 20:13:23 -- host/multipath.sh@67 -- # active_port=4420 00:21:41.862 20:13:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:41.862 Attaching 4 probes... 
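Each confirm_io_on_port pass reduces to two lookups that must agree: the port nvmf reports for the expected ANA state, and the port on which the nvmf_path.bt bpftrace probe actually counted I/O (the @path[...] lines above). A rough sketch of that check, assuming the probe output was saved to trace.txt; the exact variable names in host/multipath.sh may differ:

active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# first @path line, e.g. "@path[10.0.0.2, 4421]: 19114" -> field 2 is "4421]:" -> "4421"
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
[[ $port == "$active_port" ]]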
00:21:41.862 @path[10.0.0.2, 4420]: 19144 00:21:41.862 @path[10.0.0.2, 4420]: 19297 00:21:41.862 @path[10.0.0.2, 4420]: 18273 00:21:41.862 @path[10.0.0.2, 4420]: 19333 00:21:41.862 @path[10.0.0.2, 4420]: 19406 00:21:41.862 20:13:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:41.862 20:13:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:41.862 20:13:23 -- host/multipath.sh@69 -- # sed -n 1p 00:21:41.862 20:13:23 -- host/multipath.sh@69 -- # port=4420 00:21:41.862 20:13:23 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:41.862 20:13:23 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:41.862 20:13:23 -- host/multipath.sh@72 -- # kill 77186 00:21:41.862 20:13:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:41.862 20:13:23 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:41.862 20:13:23 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:41.862 20:13:23 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:41.862 20:13:24 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:41.862 20:13:24 -- host/multipath.sh@65 -- # dtrace_pid=77298 00:21:41.862 20:13:24 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:41.862 20:13:24 -- host/multipath.sh@66 -- # sleep 6 00:21:48.433 20:13:30 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:48.433 20:13:30 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:48.433 20:13:30 -- host/multipath.sh@67 -- # active_port=4421 00:21:48.433 20:13:30 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:48.433 Attaching 4 probes... 
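Each iteration moves I/O between the two paths purely by changing ANA states on the target; nothing is reconfigured on the initiator. Failing over from port 4420 to 4421, as done just above, is only these two calls (values from this log):

rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized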
00:21:48.433 @path[10.0.0.2, 4421]: 15083 00:21:48.433 @path[10.0.0.2, 4421]: 17980 00:21:48.433 @path[10.0.0.2, 4421]: 20368 00:21:48.433 @path[10.0.0.2, 4421]: 19919 00:21:48.433 @path[10.0.0.2, 4421]: 19858 00:21:48.433 20:13:30 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:48.433 20:13:30 -- host/multipath.sh@69 -- # sed -n 1p 00:21:48.433 20:13:30 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:48.433 20:13:30 -- host/multipath.sh@69 -- # port=4421 00:21:48.433 20:13:30 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:48.433 20:13:30 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:48.433 20:13:30 -- host/multipath.sh@72 -- # kill 77298 00:21:48.433 20:13:30 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:48.433 20:13:30 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:48.433 20:13:30 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:48.433 20:13:30 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:48.433 20:13:30 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:48.433 20:13:30 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:48.433 20:13:30 -- host/multipath.sh@65 -- # dtrace_pid=77416 00:21:48.433 20:13:30 -- host/multipath.sh@66 -- # sleep 6 00:21:55.088 20:13:36 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:55.088 20:13:36 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:55.088 20:13:36 -- host/multipath.sh@67 -- # active_port= 00:21:55.088 20:13:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:55.088 Attaching 4 probes... 
00:21:55.088 00:21:55.088 00:21:55.088 00:21:55.088 00:21:55.088 00:21:55.088 20:13:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:55.088 20:13:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:55.088 20:13:36 -- host/multipath.sh@69 -- # sed -n 1p 00:21:55.088 20:13:36 -- host/multipath.sh@69 -- # port= 00:21:55.088 20:13:36 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:55.088 20:13:36 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:55.088 20:13:36 -- host/multipath.sh@72 -- # kill 77416 00:21:55.088 20:13:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:55.088 20:13:36 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:55.088 20:13:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:55.088 20:13:37 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:55.088 20:13:37 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:55.088 20:13:37 -- host/multipath.sh@65 -- # dtrace_pid=77527 00:21:55.088 20:13:37 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:55.088 20:13:37 -- host/multipath.sh@66 -- # sleep 6 00:22:01.660 20:13:43 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:01.660 20:13:43 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:01.660 20:13:43 -- host/multipath.sh@67 -- # active_port=4421 00:22:01.660 20:13:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.660 Attaching 4 probes... 
00:22:01.660 @path[10.0.0.2, 4421]: 19190 00:22:01.660 @path[10.0.0.2, 4421]: 20123 00:22:01.660 @path[10.0.0.2, 4421]: 19456 00:22:01.660 @path[10.0.0.2, 4421]: 19758 00:22:01.660 @path[10.0.0.2, 4421]: 19871 00:22:01.660 20:13:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:01.660 20:13:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:01.660 20:13:43 -- host/multipath.sh@69 -- # sed -n 1p 00:22:01.660 20:13:43 -- host/multipath.sh@69 -- # port=4421 00:22:01.660 20:13:43 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:01.660 20:13:43 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:01.660 20:13:43 -- host/multipath.sh@72 -- # kill 77527 00:22:01.660 20:13:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.660 20:13:43 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.660 [2024-04-24 20:13:43.773783] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773839] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773865] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773905] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 [2024-04-24 20:13:43.773918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa54990 is same with the state(5) to be set 00:22:01.660 20:13:43 -- host/multipath.sh@101 -- # sleep 1 00:22:02.596 20:13:44 -- host/multipath.sh@104 -- # 
confirm_io_on_port non_optimized 4420 00:22:02.596 20:13:44 -- host/multipath.sh@65 -- # dtrace_pid=77646 00:22:02.596 20:13:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:02.596 20:13:44 -- host/multipath.sh@66 -- # sleep 6 00:22:09.193 20:13:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:09.193 20:13:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:09.193 20:13:51 -- host/multipath.sh@67 -- # active_port=4420 00:22:09.193 20:13:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:09.193 Attaching 4 probes... 00:22:09.193 @path[10.0.0.2, 4420]: 17643 00:22:09.193 @path[10.0.0.2, 4420]: 17887 00:22:09.193 @path[10.0.0.2, 4420]: 18176 00:22:09.193 @path[10.0.0.2, 4420]: 18948 00:22:09.193 @path[10.0.0.2, 4420]: 18249 00:22:09.193 20:13:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:09.193 20:13:51 -- host/multipath.sh@69 -- # sed -n 1p 00:22:09.193 20:13:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:09.193 20:13:51 -- host/multipath.sh@69 -- # port=4420 00:22:09.193 20:13:51 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:09.193 20:13:51 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:09.193 20:13:51 -- host/multipath.sh@72 -- # kill 77646 00:22:09.193 20:13:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:09.193 20:13:51 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:09.193 [2024-04-24 20:13:51.269800] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:09.193 20:13:51 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:09.451 20:13:51 -- host/multipath.sh@111 -- # sleep 6 00:22:16.014 20:13:57 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:16.014 20:13:57 -- host/multipath.sh@65 -- # dtrace_pid=77826 00:22:16.014 20:13:57 -- host/multipath.sh@66 -- # sleep 6 00:22:16.014 20:13:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 76977 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:22.593 20:14:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:22.593 20:14:03 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:22.593 20:14:03 -- host/multipath.sh@67 -- # active_port=4421 00:22:22.593 20:14:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:22.593 Attaching 4 probes... 
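The final two checks exercise path loss and recovery rather than ANA flips: the 4421 listener is removed entirely, so I/O must fall back to 4420, and it is then re-created and marked optimized, so I/O must move back. In RPC terms, roughly as issued in this log:

rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# confirm_io_on_port non_optimized 4420, then restore the second path:
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized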
00:22:22.593 @path[10.0.0.2, 4421]: 18899 00:22:22.593 @path[10.0.0.2, 4421]: 19427 00:22:22.593 @path[10.0.0.2, 4421]: 18666 00:22:22.593 @path[10.0.0.2, 4421]: 18542 00:22:22.593 @path[10.0.0.2, 4421]: 18644 00:22:22.593 20:14:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:22.593 20:14:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:22.593 20:14:03 -- host/multipath.sh@69 -- # sed -n 1p 00:22:22.593 20:14:03 -- host/multipath.sh@69 -- # port=4421 00:22:22.593 20:14:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.593 20:14:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.593 20:14:03 -- host/multipath.sh@72 -- # kill 77826 00:22:22.593 20:14:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:22.593 20:14:03 -- host/multipath.sh@114 -- # killprocess 77027 00:22:22.593 20:14:03 -- common/autotest_common.sh@936 -- # '[' -z 77027 ']' 00:22:22.593 20:14:03 -- common/autotest_common.sh@940 -- # kill -0 77027 00:22:22.593 20:14:03 -- common/autotest_common.sh@941 -- # uname 00:22:22.593 20:14:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.593 20:14:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77027 00:22:22.593 20:14:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:22.593 20:14:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:22.593 killing process with pid 77027 00:22:22.593 20:14:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77027' 00:22:22.593 20:14:03 -- common/autotest_common.sh@955 -- # kill 77027 00:22:22.593 20:14:03 -- common/autotest_common.sh@960 -- # wait 77027 00:22:22.593 Connection closed with partial response: 00:22:22.593 00:22:22.593 00:22:22.593 20:14:04 -- host/multipath.sh@116 -- # wait 77027 00:22:22.593 20:14:04 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:22.593 [2024-04-24 20:13:07.595879] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:22:22.593 [2024-04-24 20:13:07.595998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77027 ] 00:22:22.593 [2024-04-24 20:13:07.738050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.593 [2024-04-24 20:13:07.842115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.593 Running I/O for 90 seconds... 
00:22:22.593 [2024-04-24 20:13:17.324642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.324935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.324946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.325424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.325455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.325503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.593 [2024-04-24 20:13:17.325531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:22.593 [2024-04-24 20:13:17.325797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.593 [2024-04-24 20:13:17.325808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.325835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.325870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.325898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.325926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.325954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.325982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.325999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:22.594 [2024-04-24 20:13:17.326037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.326843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.326870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.326898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.326926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.326953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.326981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.326998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.327017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.327036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.327047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.327064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.594 [2024-04-24 20:13:17.327075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.327092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.327103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.327120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.327132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.327149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.594 [2024-04-24 20:13:17.327159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:22.594 [2024-04-24 20:13:17.327176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:22:22.595 [2024-04-24 20:13:17.327925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.327974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.327991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.328001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.328029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.595 [2024-04-24 20:13:17.328057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.595 [2024-04-24 20:13:17.328720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:22.595 [2024-04-24 20:13:17.328737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.328748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:22.596 [2024-04-24 20:13:17.329636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.329835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.329874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.329903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.329931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.329959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.329976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.329986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:22:22.596 [2024-04-24 20:13:17.330504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.596 [2024-04-24 20:13:17.330550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:22.596 [2024-04-24 20:13:17.330568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.596 [2024-04-24 20:13:17.330579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:17.330770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:17.330781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.802707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.802772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.802839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.802851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.802869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.802880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.802896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.802907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.802923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.802933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.802973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.802984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.597 [2024-04-24 20:13:23.803252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:22.597 [2024-04-24 20:13:23.803370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:22.597 [2024-04-24 20:13:23.803603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.597 [2024-04-24 20:13:23.803614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.803641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35032 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.803675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.803702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.803729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.598 [2024-04-24 20:13:23.803949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.803978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.803993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:22.598 [2024-04-24 20:13:23.804178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:22:22.598 [2024-04-24 20:13:23.804202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.598 [2024-04-24 20:13:23.804212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.804802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.804957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.804972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:22.599 [2024-04-24 20:13:23.804986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.805002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.599 [2024-04-24 20:13:23.805017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:22.599 [2024-04-24 20:13:23.805338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.599 [2024-04-24 20:13:23.805356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.805978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.805988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.806018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.806048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.600 [2024-04-24 20:13:23.806086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
00:22:22.600 [2024-04-24 20:13:23.806229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.600 [2024-04-24 20:13:23.806349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:22.600 [2024-04-24 20:13:23.806369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:23.806881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:23.806898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.621881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.621950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.621999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:22.601 [2024-04-24 20:13:30.622210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.601 [2024-04-24 20:13:30.622396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.601 [2024-04-24 20:13:30.622420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.601 [2024-04-24 20:13:30.622449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:22.601 [2024-04-24 20:13:30.622464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.601 [2024-04-24 20:13:30.622474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.622813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.622842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.622870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.622896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.622921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.622952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.622977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.622993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.623002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:22:22.602 [2024-04-24 20:13:30.623018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.602 [2024-04-24 20:13:30.623028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:22.602 [2024-04-24 20:13:30.623359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.602 [2024-04-24 20:13:30.623368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.603 [2024-04-24 20:13:30.623405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.603 [2024-04-24 20:13:30.623430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.603 [2024-04-24 20:13:30.623456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.623973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.623983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:22.603 [2024-04-24 20:13:30.624122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:22.603 [2024-04-24 20:13:30.624419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.603 [2024-04-24 20:13:30.624429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.624924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.624968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.624988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 
dnr:0 00:22:22.604 [2024-04-24 20:13:30.625024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.604 [2024-04-24 20:13:30.625456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.604 [2024-04-24 20:13:30.625486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:22.604 [2024-04-24 20:13:30.625507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:30.625923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:30.625944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:22.605 [2024-04-24 20:13:30.625955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.773989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774295] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.605 [2024-04-24 20:13:43.774436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.605 [2024-04-24 20:13:43.774501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.605 [2024-04-24 20:13:43.774518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:51552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.774529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.774550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.774585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.774606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.774627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:51048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.774969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.774979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:22.606 [2024-04-24 20:13:43.774990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:51632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.606 [2024-04-24 20:13:43.775150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775212] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.606 [2024-04-24 20:13:43.775333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.606 [2024-04-24 20:13:43.775350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:51712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:24 nsid:1 lba:51192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:51200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:51224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:51248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.607 [2024-04-24 20:13:43.775889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:51720 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:51728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.775989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.775998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.776010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.776020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.776031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.776041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.607 [2024-04-24 20:13:43.776052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:51776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.607 [2024-04-24 20:13:43.776062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 
[2024-04-24 20:13:43.776124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:22.608 [2024-04-24 20:13:43.776587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.608 [2024-04-24 20:13:43.776661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.608 [2024-04-24 20:13:43.776670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.609 [2024-04-24 20:13:43.776905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.776917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc16d0 is same with the state(5) to be set 00:22:22.609 [2024-04-24 20:13:43.776929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:22.609 [2024-04-24 20:13:43.776936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:22.609 [2024-04-24 20:13:43.776944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51456 len:8 PRP1 0x0 PRP2 0x0 00:22:22.609 [2024-04-24 20:13:43.776953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.777005] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cc16d0 was disconnected and freed. reset controller. 
00:22:22.609 [2024-04-24 20:13:43.777113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.609 [2024-04-24 20:13:43.777131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.777145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.609 [2024-04-24 20:13:43.777159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.777174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.609 [2024-04-24 20:13:43.777186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.777208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.609 [2024-04-24 20:13:43.777221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.609 [2024-04-24 20:13:43.777233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc7a20 is same with the state(5) to be set 00:22:22.609 [2024-04-24 20:13:43.778270] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.609 [2024-04-24 20:13:43.778306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc7a20 (9): Bad file descriptor 00:22:22.609 [2024-04-24 20:13:43.778661] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.609 [2024-04-24 20:13:43.778719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.609 [2024-04-24 20:13:43.778754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:22.609 [2024-04-24 20:13:43.778768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc7a20 with addr=10.0.0.2, port=4421 00:22:22.609 [2024-04-24 20:13:43.778779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc7a20 is same with the state(5) to be set 00:22:22.609 [2024-04-24 20:13:43.778803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc7a20 (9): Bad file descriptor 00:22:22.609 [2024-04-24 20:13:43.778824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:22.609 [2024-04-24 20:13:43.778834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:22.609 [2024-04-24 20:13:43.778845] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:22.609 [2024-04-24 20:13:43.778870] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:22.609 [2024-04-24 20:13:43.778880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:22.609 [2024-04-24 20:13:53.822214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:22.609 Received shutdown signal, test time was about 54.634016 seconds 00:22:22.609 00:22:22.609 Latency(us) 00:22:22.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.609 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:22.609 Verification LBA range: start 0x0 length 0x4000 00:22:22.609 Nvme0n1 : 54.63 8006.51 31.28 0.00 0.00 15963.86 912.21 7033243.39 00:22:22.609 =================================================================================================================== 00:22:22.609 Total : 8006.51 31.28 0.00 0.00 15963.86 912.21 7033243.39 00:22:22.609 20:14:04 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.609 20:14:04 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:22.609 20:14:04 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:22.609 20:14:04 -- host/multipath.sh@125 -- # nvmftestfini 00:22:22.609 20:14:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:22.609 20:14:04 -- nvmf/common.sh@117 -- # sync 00:22:22.609 20:14:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.609 20:14:04 -- nvmf/common.sh@120 -- # set +e 00:22:22.609 20:14:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.609 20:14:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.609 rmmod nvme_tcp 00:22:22.609 rmmod nvme_fabrics 00:22:22.609 rmmod nvme_keyring 00:22:22.609 20:14:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.609 20:14:04 -- nvmf/common.sh@124 -- # set -e 00:22:22.609 20:14:04 -- nvmf/common.sh@125 -- # return 0 00:22:22.609 20:14:04 -- nvmf/common.sh@478 -- # '[' -n 76977 ']' 00:22:22.609 20:14:04 -- nvmf/common.sh@479 -- # killprocess 76977 00:22:22.609 20:14:04 -- common/autotest_common.sh@936 -- # '[' -z 76977 ']' 00:22:22.609 20:14:04 -- common/autotest_common.sh@940 -- # kill -0 76977 00:22:22.609 20:14:04 -- common/autotest_common.sh@941 -- # uname 00:22:22.609 20:14:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:22.609 20:14:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76977 00:22:22.609 20:14:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:22.609 killing process with pid 76977 00:22:22.609 20:14:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:22.609 20:14:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76977' 00:22:22.609 20:14:04 -- common/autotest_common.sh@955 -- # kill 76977 00:22:22.609 [2024-04-24 20:14:04.551717] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:22.609 20:14:04 -- common/autotest_common.sh@960 -- # wait 76977 00:22:22.609 20:14:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:22.609 20:14:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:22.609 20:14:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:22.610 20:14:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.610 20:14:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.610 20:14:04 
-- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.610 20:14:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.610 20:14:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.916 20:14:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:22.916 00:22:22.916 real 1m0.136s 00:22:22.916 user 2m48.655s 00:22:22.916 sys 0m15.915s 00:22:22.916 20:14:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:22.916 20:14:04 -- common/autotest_common.sh@10 -- # set +x 00:22:22.916 ************************************ 00:22:22.916 END TEST nvmf_multipath 00:22:22.916 ************************************ 00:22:22.916 20:14:04 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:22.916 20:14:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:22.916 20:14:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:22.916 20:14:04 -- common/autotest_common.sh@10 -- # set +x 00:22:22.916 ************************************ 00:22:22.916 START TEST nvmf_timeout 00:22:22.916 ************************************ 00:22:22.916 20:14:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:22.916 * Looking for test storage... 00:22:22.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:22.916 20:14:05 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.916 20:14:05 -- nvmf/common.sh@7 -- # uname -s 00:22:22.916 20:14:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.916 20:14:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.916 20:14:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.916 20:14:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.916 20:14:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.916 20:14:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.916 20:14:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.916 20:14:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.916 20:14:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.916 20:14:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.916 20:14:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:22:22.916 20:14:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:22:22.916 20:14:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.916 20:14:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.916 20:14:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.916 20:14:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.916 20:14:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.916 20:14:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.916 20:14:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.916 20:14:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.916 20:14:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.916 20:14:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.916 20:14:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.916 20:14:05 -- paths/export.sh@5 -- # export PATH 00:22:22.916 20:14:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.916 20:14:05 -- nvmf/common.sh@47 -- # : 0 00:22:22.916 20:14:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.916 20:14:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.916 20:14:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.916 20:14:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.916 20:14:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.916 20:14:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.916 20:14:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.916 20:14:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.916 20:14:05 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.916 20:14:05 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.916 20:14:05 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.916 20:14:05 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:22.916 20:14:05 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.916 20:14:05 -- host/timeout.sh@19 -- # nvmftestinit 00:22:22.916 20:14:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:22.916 20:14:05 -- nvmf/common.sh@435 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:22:22.916 20:14:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:22.916 20:14:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:22.916 20:14:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:22.916 20:14:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.916 20:14:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.916 20:14:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.916 20:14:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:22.916 20:14:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:22.916 20:14:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:22.916 20:14:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:22.916 20:14:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:22.916 20:14:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:22.916 20:14:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.916 20:14:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.916 20:14:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:22.916 20:14:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:22.916 20:14:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.917 20:14:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.917 20:14:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.917 20:14:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.917 20:14:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.917 20:14:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.917 20:14:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.917 20:14:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.917 20:14:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:22.917 20:14:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:23.175 Cannot find device "nvmf_tgt_br" 00:22:23.175 20:14:05 -- nvmf/common.sh@155 -- # true 00:22:23.175 20:14:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:23.175 Cannot find device "nvmf_tgt_br2" 00:22:23.175 20:14:05 -- nvmf/common.sh@156 -- # true 00:22:23.175 20:14:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:23.175 20:14:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:23.175 Cannot find device "nvmf_tgt_br" 00:22:23.175 20:14:05 -- nvmf/common.sh@158 -- # true 00:22:23.175 20:14:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:23.175 Cannot find device "nvmf_tgt_br2" 00:22:23.175 20:14:05 -- nvmf/common.sh@159 -- # true 00:22:23.175 20:14:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:23.175 20:14:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:23.175 20:14:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:23.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.175 20:14:05 -- nvmf/common.sh@162 -- # true 00:22:23.175 20:14:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:23.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.175 20:14:05 -- nvmf/common.sh@163 -- # true 00:22:23.175 20:14:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:23.175 20:14:05 -- nvmf/common.sh@169 
-- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:23.175 20:14:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:23.176 20:14:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:23.176 20:14:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:23.176 20:14:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:23.176 20:14:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:23.176 20:14:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:23.176 20:14:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:23.176 20:14:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:23.176 20:14:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:23.176 20:14:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:23.176 20:14:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:23.176 20:14:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:23.176 20:14:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:23.176 20:14:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:23.176 20:14:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:23.176 20:14:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:23.176 20:14:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:23.434 20:14:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:23.434 20:14:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:23.434 20:14:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:23.434 20:14:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:23.434 20:14:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:23.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:22:23.434 00:22:23.434 --- 10.0.0.2 ping statistics --- 00:22:23.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.434 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:22:23.434 20:14:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:23.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:23.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:23.434 00:22:23.434 --- 10.0.0.3 ping statistics --- 00:22:23.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.434 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:23.434 20:14:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:23.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:23.434 00:22:23.434 --- 10.0.0.1 ping statistics --- 00:22:23.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.434 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:23.434 20:14:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.434 20:14:05 -- nvmf/common.sh@422 -- # return 0 00:22:23.434 20:14:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:23.434 20:14:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.434 20:14:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:23.434 20:14:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:23.434 20:14:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.434 20:14:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:23.434 20:14:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:23.434 20:14:05 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:23.434 20:14:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:23.434 20:14:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:23.434 20:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:23.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.434 20:14:05 -- nvmf/common.sh@470 -- # nvmfpid=78140 00:22:23.434 20:14:05 -- nvmf/common.sh@471 -- # waitforlisten 78140 00:22:23.434 20:14:05 -- common/autotest_common.sh@817 -- # '[' -z 78140 ']' 00:22:23.434 20:14:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.434 20:14:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:23.434 20:14:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.434 20:14:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:23.434 20:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:23.434 20:14:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:23.434 [2024-04-24 20:14:05.561742] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:22:23.434 [2024-04-24 20:14:05.561830] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.693 [2024-04-24 20:14:05.702841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:23.693 [2024-04-24 20:14:05.848604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.693 [2024-04-24 20:14:05.848653] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.693 [2024-04-24 20:14:05.848659] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.693 [2024-04-24 20:14:05.848665] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.693 [2024-04-24 20:14:05.848670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.693 [2024-04-24 20:14:05.848889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.693 [2024-04-24 20:14:05.848903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.259 20:14:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:24.259 20:14:06 -- common/autotest_common.sh@850 -- # return 0 00:22:24.259 20:14:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:24.259 20:14:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:24.259 20:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:24.259 20:14:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.259 20:14:06 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.259 20:14:06 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.518 [2024-04-24 20:14:06.768313] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.776 20:14:06 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:25.034 Malloc0 00:22:25.034 20:14:07 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.293 20:14:07 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.551 20:14:07 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.551 [2024-04-24 20:14:07.797234] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:25.551 [2024-04-24 20:14:07.797507] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.809 20:14:07 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:25.809 20:14:07 -- host/timeout.sh@32 -- # bdevperf_pid=78195 00:22:25.809 20:14:07 -- host/timeout.sh@34 -- # waitforlisten 78195 /var/tmp/bdevperf.sock 00:22:25.809 20:14:07 -- common/autotest_common.sh@817 -- # '[' -z 78195 ']' 00:22:25.809 20:14:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.809 20:14:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.809 20:14:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.809 20:14:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.809 20:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:25.809 [2024-04-24 20:14:07.866620] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
00:22:25.809 [2024-04-24 20:14:07.866722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78195 ] 00:22:25.809 [2024-04-24 20:14:07.994450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.066 [2024-04-24 20:14:08.101726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.633 20:14:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.633 20:14:08 -- common/autotest_common.sh@850 -- # return 0 00:22:26.633 20:14:08 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:26.892 20:14:09 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:27.170 NVMe0n1 00:22:27.170 20:14:09 -- host/timeout.sh@51 -- # rpc_pid=78217 00:22:27.170 20:14:09 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.170 20:14:09 -- host/timeout.sh@53 -- # sleep 1 00:22:27.460 Running I/O for 10 seconds... 00:22:28.396 20:14:10 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.396 [2024-04-24 20:14:10.508811] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508884] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508890] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508896] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508908] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508931] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 20:14:10.508937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set 00:22:28.396 [2024-04-24 
20:14:10.508942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdb0 is same with the state(5) to be set [... the same nvmf_tcp_qpair_set_recv_state *ERROR* notice for tqpair=0xefcdb0 repeats several more times and is elided here ...] 
00:22:28.397 [2024-04-24 20:14:10.509037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:28.397 [2024-04-24 20:14:10.509113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... further paired nvme_io_qpair_print_command READ/WRITE *NOTICE* entries (lba 78120 through 79128) with matching ABORTED - SQ DELETION (00/08) completions are elided; every command still outstanding on the qpair was aborted after the listener was removed ...] 
00:22:28.400 [2024-04-24 20:14:10.512366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17528a0 is same with the state(5) to be set 00:22:28.400 [2024-04-24 20:14:10.512391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:28.400 [2024-04-24 20:14:10.512400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-04-24 20:14:10.512406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78584 len:8 PRP1 0x0 PRP2 0x0 00:22:28.400 [2024-04-24 20:14:10.512416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:28.400 [2024-04-24 20:14:10.512498] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17528a0 was disconnected and freed. reset controller. 00:22:28.400 [2024-04-24 20:14:10.512766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:28.400 [2024-04-24 20:14:10.512860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16eadc0 (9): Bad file descriptor 00:22:28.400 [2024-04-24 20:14:10.512960] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.400 [2024-04-24 20:14:10.513014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.400 [2024-04-24 20:14:10.513040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:28.400 [2024-04-24 20:14:10.513050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16eadc0 with addr=10.0.0.2, port=4420 00:22:28.400 [2024-04-24 20:14:10.513058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eadc0 is same with the state(5) to be set 00:22:28.400 [2024-04-24 20:14:10.513072] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16eadc0 (9): Bad file descriptor 00:22:28.400 [2024-04-24 20:14:10.513083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:28.400 [2024-04-24 20:14:10.513089] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:28.400 [2024-04-24 20:14:10.513097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:28.400 [2024-04-24 20:14:10.513114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
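For context: the run above attached NVMe0 with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 and then removed the target listener, so every command still outstanding on the qpair was aborted (SQ DELETION) and bdev_nvme starts reconnect attempts roughly every 2 seconds (20:14:10 here, then 20:14:12 and 20:14:14 below) until the 5-second controller-loss timeout expires and the controller is dropped. A minimal sketch of the equivalent manual RPC sequence, assuming a bdevperf instance is already listening on /var/tmp/bdevperf.sock as in this job:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Dropping the listener on the target side then triggers the abort/reconnect sequence logged above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420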
00:22:28.400 [2024-04-24 20:14:10.513121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:28.400 20:14:10 -- host/timeout.sh@56 -- # sleep 2 00:22:30.299 [2024-04-24 20:14:12.509484] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.299 [2024-04-24 20:14:12.509573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.299 [2024-04-24 20:14:12.509602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.299 [2024-04-24 20:14:12.509614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16eadc0 with addr=10.0.0.2, port=4420 00:22:30.299 [2024-04-24 20:14:12.509625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eadc0 is same with the state(5) to be set 00:22:30.299 [2024-04-24 20:14:12.509649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16eadc0 (9): Bad file descriptor 00:22:30.299 [2024-04-24 20:14:12.509672] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.300 [2024-04-24 20:14:12.509680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:30.300 [2024-04-24 20:14:12.509687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:30.300 [2024-04-24 20:14:12.509711] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:30.300 [2024-04-24 20:14:12.509720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:30.300 20:14:12 -- host/timeout.sh@57 -- # get_controller 00:22:30.300 20:14:12 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:30.300 20:14:12 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.557 20:14:12 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:30.558 20:14:12 -- host/timeout.sh@58 -- # get_bdev 00:22:30.558 20:14:12 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:30.558 20:14:12 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:30.816 20:14:13 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:30.816 20:14:13 -- host/timeout.sh@61 -- # sleep 5 00:22:32.718 [2024-04-24 20:14:14.506121] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.718 [2024-04-24 20:14:14.506211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.718 [2024-04-24 20:14:14.506238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.718 [2024-04-24 20:14:14.506248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16eadc0 with addr=10.0.0.2, port=4420 00:22:32.718 [2024-04-24 20:14:14.506258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16eadc0 is same with the state(5) to be set 00:22:32.718 [2024-04-24 20:14:14.506282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16eadc0 (9): Bad file descriptor 00:22:32.718 [2024-04-24 20:14:14.506297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.718 [2024-04-24 20:14:14.506304] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:32.718 [2024-04-24 20:14:14.506311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:32.718 [2024-04-24 20:14:14.506335] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:32.718 [2024-04-24 20:14:14.506343] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.621 [2024-04-24 20:14:16.502603] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:35.558 00:22:35.558 Latency(us) 00:22:35.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.558 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:35.558 Verification LBA range: start 0x0 length 0x4000 00:22:35.558 NVMe0n1 : 8.09 1206.48 4.71 15.82 0.00 104797.98 3562.98 7033243.39 00:22:35.558 =================================================================================================================== 00:22:35.558 Total : 1206.48 4.71 15.82 0.00 104797.98 3562.98 7033243.39 00:22:35.558 0 00:22:35.817 20:14:18 -- host/timeout.sh@62 -- # get_controller 00:22:35.817 20:14:18 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.817 20:14:18 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:36.075 20:14:18 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:36.075 20:14:18 -- host/timeout.sh@63 -- # get_bdev 00:22:36.075 20:14:18 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:36.075 20:14:18 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:36.334 20:14:18 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:36.334 20:14:18 -- host/timeout.sh@65 -- # wait 78217 00:22:36.334 20:14:18 -- host/timeout.sh@67 -- # killprocess 78195 00:22:36.334 20:14:18 -- common/autotest_common.sh@936 -- # '[' -z 78195 ']' 00:22:36.334 20:14:18 -- common/autotest_common.sh@940 -- # kill -0 78195 00:22:36.334 20:14:18 -- common/autotest_common.sh@941 -- # uname 00:22:36.334 20:14:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.334 20:14:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78195 00:22:36.334 20:14:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:36.334 20:14:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:36.334 killing process with pid 78195 00:22:36.334 20:14:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78195' 00:22:36.334 20:14:18 -- common/autotest_common.sh@955 -- # kill 78195 00:22:36.334 Received shutdown signal, test time was about 9.105418 seconds 00:22:36.334 00:22:36.334 Latency(us) 00:22:36.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.334 =================================================================================================================== 00:22:36.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.334 20:14:18 -- common/autotest_common.sh@960 -- # wait 78195 00:22:36.594 20:14:18 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.853 [2024-04-24 20:14:18.927700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:22:36.853 20:14:18 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:36.853 20:14:18 -- host/timeout.sh@74 -- # bdevperf_pid=78341 00:22:36.853 20:14:18 -- host/timeout.sh@76 -- # waitforlisten 78341 /var/tmp/bdevperf.sock 00:22:36.853 20:14:18 -- common/autotest_common.sh@817 -- # '[' -z 78341 ']' 00:22:36.853 20:14:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.853 20:14:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:36.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.853 20:14:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.853 20:14:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:36.853 20:14:18 -- common/autotest_common.sh@10 -- # set +x 00:22:36.853 [2024-04-24 20:14:18.994979] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:22:36.853 [2024-04-24 20:14:18.995071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78341 ] 00:22:37.112 [2024-04-24 20:14:19.139819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.112 [2024-04-24 20:14:19.244055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.679 20:14:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:37.679 20:14:19 -- common/autotest_common.sh@850 -- # return 0 00:22:37.679 20:14:19 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:37.937 20:14:20 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:38.195 NVMe0n1 00:22:38.457 20:14:20 -- host/timeout.sh@84 -- # rpc_pid=78359 00:22:38.457 20:14:20 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:38.457 20:14:20 -- host/timeout.sh@86 -- # sleep 1 00:22:38.457 Running I/O for 10 seconds... 
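For context: this second run re-attaches the controller with --reconnect-delay-sec 1 and adds --fast-io-fail-timeout-sec 2 on top of the same 5-second controller-loss timeout; the expectation (an assumption about bdev_nvme semantics, not something this log states) is that queued I/O begins failing back about 2 seconds after the disconnect while reconnects keep retrying once per second until the controller-loss timeout. A minimal sketch of that attach call, with the flag values taken from the command above:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1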
00:22:39.391 20:14:21 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.652 [2024-04-24 20:14:21.740028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.652 [2024-04-24 20:14:21.740092] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740113] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740142] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740159] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740181] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740202] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740214] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3c90 is same with the state(5) to be set 00:22:39.653 [2024-04-24 20:14:21.740267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
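The repeated tcp.c:1587:nvmf_tcp_qpair_set_recv_state errors above are printed on the target side: each one reports that the receive state requested for tqpair=0x10f3c90 is the state the qpair is already in, and the message keeps firing while the connection is torn down after the listener removal traced at host/timeout.sh@87 at the top of this stretch. For a saved copy of this console output (build.log is only an assumed file name used for illustration, not something this job produces), a one-liner is enough to count how often the message fired:

  grep -c 'is same with the state(5) to be set' build.log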
00:22:39.653 [2024-04-24 20:14:21.740503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740693] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.653 [2024-04-24 20:14:21.740768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.653 [2024-04-24 20:14:21.740785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.653 [2024-04-24 20:14:21.740802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.653 [2024-04-24 20:14:21.740819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.653 [2024-04-24 20:14:21.740837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.653 [2024-04-24 20:14:21.740854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.653 [2024-04-24 20:14:21.740862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.740984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.740995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
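The paired nvme_qpair.c notices that continue below are the host-side view of the same teardown: nvme_io_qpair_print_command echoes each queued command (opcode, sqid, cid, nsid, lba, len), and spdk_nvme_print_completion prints the matching completion status, here always ABORTED - SQ DELETION (00/08) because I/O submission queue 1 is being deleted while the controller resets. To pull the affected LBAs out of a saved capture of this log (again assuming a file named build.log), something like the following works:

  # list the distinct LBAs that show up in the abort dump
  grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -n | uniq | head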
00:22:39.654 [2024-04-24 20:14:21.741222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.654 [2024-04-24 20:14:21.741468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.654 [2024-04-24 20:14:21.741555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.654 [2024-04-24 20:14:21.741564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.741570] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.741588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.741608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 
[2024-04-24 20:14:21.741929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.741987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.741995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.655 [2024-04-24 20:14:21.742162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.655 [2024-04-24 20:14:21.742238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.655 [2024-04-24 20:14:21.742247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:79 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:39.656 [2024-04-24 20:14:21.742450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83528 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.656 [2024-04-24 20:14:21.742566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5af0 is same with the state(5) to be set 00:22:39.656 [2024-04-24 20:14:21.742596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:39.656 [2024-04-24 20:14:21.742620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:39.656 [2024-04-24 20:14:21.742628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83584 len:8 PRP1 0x0 PRP2 0x0 00:22:39.656 [2024-04-24 20:14:21.742635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.656 [2024-04-24 20:14:21.742691] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c5af0 was disconnected and freed. reset controller. 
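At the end of the dump the target frees the qpair (0x20c5af0) and bdev_nvme starts a controller reset. The reconnect attempts that follow fail with connect() errno = 111 (ECONNREFUSED) because the subsystem no longer has a TCP listener on 10.0.0.2 port 4420. Against a still-running target with the default RPC socket, one way to confirm that would be to list the subsystems and their listen addresses; this is a hypothetical check, not a step this job performs:

  # hypothetical: show nqn.2016-06.io.spdk:cnode1 and whatever listeners it still has
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems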
00:22:39.656 [2024-04-24 20:14:21.742925] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.656 [2024-04-24 20:14:21.743007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:39.656 [2024-04-24 20:14:21.743093] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.656 [2024-04-24 20:14:21.743143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.656 [2024-04-24 20:14:21.743175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:39.656 [2024-04-24 20:14:21.743185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ddc0 with addr=10.0.0.2, port=4420 00:22:39.656 [2024-04-24 20:14:21.743194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205ddc0 is same with the state(5) to be set 00:22:39.656 [2024-04-24 20:14:21.743208] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:39.656 [2024-04-24 20:14:21.743236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:39.656 [2024-04-24 20:14:21.743246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:39.656 [2024-04-24 20:14:21.743254] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:39.656 [2024-04-24 20:14:21.743273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:39.656 [2024-04-24 20:14:21.743283] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:39.656 20:14:21 -- host/timeout.sh@90 -- # sleep 1 00:22:40.591 [2024-04-24 20:14:22.741509] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.592 [2024-04-24 20:14:22.741602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.592 [2024-04-24 20:14:22.741629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:40.592 [2024-04-24 20:14:22.741640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ddc0 with addr=10.0.0.2, port=4420 00:22:40.592 [2024-04-24 20:14:22.741652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205ddc0 is same with the state(5) to be set 00:22:40.592 [2024-04-24 20:14:22.741673] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:40.592 [2024-04-24 20:14:22.741696] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:40.592 [2024-04-24 20:14:22.741703] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:40.592 [2024-04-24 20:14:22.741711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:40.592 [2024-04-24 20:14:22.741735] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
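Both reset attempts above fail the same way, separated by the script's sleep 1 (host/timeout.sh@90); the stretch that follows re-adds the listener, after which the reset succeeds and bdevperf reports its results. Condensed into a standalone sketch, the listener toggle driven by the test looks roughly like this, with the two rpc.py invocations copied verbatim from the trace lines in this log and the pause taken from the script:

  # drop the listener the host is connected through, wait, then restore it
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420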
00:22:40.592 [2024-04-24 20:14:22.741743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:40.592 20:14:22 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:40.851 [2024-04-24 20:14:23.022686] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:40.851 20:14:23 -- host/timeout.sh@92 -- # wait 78359
00:22:41.786 [2024-04-24 20:14:23.758163] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:48.346
00:22:48.347 Latency(us)
00:22:48.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:48.347 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:48.347 Verification LBA range: start 0x0 length 0x4000
00:22:48.347 NVMe0n1 : 10.01 6593.85 25.76 0.00 0.00 19373.62 1266.36 3018433.62
00:22:48.347 ===================================================================================================================
00:22:48.347 Total : 6593.85 25.76 0.00 0.00 19373.62 1266.36 3018433.62
00:22:48.347 0
00:22:48.347 20:14:30 -- host/timeout.sh@97 -- # rpc_pid=78469
00:22:48.347 20:14:30 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:48.347 20:14:30 -- host/timeout.sh@98 -- # sleep 1
00:22:48.605 Running I/O for 10 seconds...
00:22:49.541 20:14:31 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:49.803 [2024-04-24 20:14:31.872223] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872316] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24 20:14:31.872340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set
00:22:49.803 [2024-04-24
20:14:31.872345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set 00:22:49.803 [2024-04-24 20:14:31.872351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set 00:22:49.803 [2024-04-24 20:14:31.872356] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set 00:22:49.803 [2024-04-24 20:14:31.872362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f4c40 is same with the state(5) to be set 00:22:49.803 [2024-04-24 20:14:31.872424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:96 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.803 [2024-04-24 20:14:31.872684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.803 [2024-04-24 20:14:31.872701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.803 [2024-04-24 20:14:31.872715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86608 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:49.803 [2024-04-24 20:14:31.872740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.803 [2024-04-24 20:14:31.872755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.803 [2024-04-24 20:14:31.872770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.803 [2024-04-24 20:14:31.872784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.803 [2024-04-24 20:14:31.872792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.804 [2024-04-24 20:14:31.872801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.804 [2024-04-24 20:14:31.872816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 
20:14:31.872891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.872988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.872995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.804 [2024-04-24 20:14:31.873196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.804 [2024-04-24 20:14:31.873211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.804 [2024-04-24 20:14:31.873222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.804 [2024-04-24 20:14:31.873230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.805 [2024-04-24 20:14:31.873454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 
[2024-04-24 20:14:31.873540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.805 [2024-04-24 20:14:31.873650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.805 [2024-04-24 20:14:31.873656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873697] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.806 [2024-04-24 20:14:31.873965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.873988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.873995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.806 [2024-04-24 20:14:31.874093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.806 [2024-04-24 20:14:31.874099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:49.807 [2024-04-24 20:14:31.874174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.807 [2024-04-24 20:14:31.874332] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.807 [2024-04-24 20:14:31.874444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c6250 is same with the state(5) to be set 00:22:49.807 [2024-04-24 20:14:31.874464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.807 [2024-04-24 20:14:31.874470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.807 [2024-04-24 20:14:31.874475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86584 len:8 PRP1 0x0 PRP2 0x0 00:22:49.807 [2024-04-24 20:14:31.874484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.807 [2024-04-24 20:14:31.874532] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20c6250 was disconnected and freed. reset controller. 
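The dump above is the tail of the queued-I/O abort: every outstanding READ/WRITE on qid:1 is completed manually with ABORTED - SQ DELETION (00/08), qpair 0x20c6250 is disconnected and freed, and the initiator then retries the connection roughly once per second, each attempt failing with errno 111 (connection refused). That pattern is consistent with the test having removed the target's TCP listener; it is re-added at host/timeout.sh@102 below, after which the reset finally succeeds. The host/timeout.sh body itself is not printed here, only its traced commands, so the following is only a minimal sketch of the listener toggle driving this phase, assuming the remove/add pair uses the same rpc.py, subsystem and address as the listener calls traced elsewhere in this log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the listener so queued I/O is aborted and reconnect attempts are refused (errno 111)
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # matches the 'sleep 3' traced at host/timeout.sh@101
    # restore the listener so the pending controller reset can complete
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420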
00:22:49.807 [2024-04-24 20:14:31.874767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.807 [2024-04-24 20:14:31.874844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:49.807 [2024-04-24 20:14:31.874924] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.807 [2024-04-24 20:14:31.874959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.807 [2024-04-24 20:14:31.874987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.807 [2024-04-24 20:14:31.874997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ddc0 with addr=10.0.0.2, port=4420 00:22:49.807 [2024-04-24 20:14:31.875004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205ddc0 is same with the state(5) to be set 00:22:49.807 [2024-04-24 20:14:31.875016] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:49.808 [2024-04-24 20:14:31.875027] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.808 [2024-04-24 20:14:31.875033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.808 [2024-04-24 20:14:31.875044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.808 [2024-04-24 20:14:31.875059] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.808 [2024-04-24 20:14:31.875066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.808 20:14:31 -- host/timeout.sh@101 -- # sleep 3 00:22:50.752 [2024-04-24 20:14:32.873271] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.752 [2024-04-24 20:14:32.873355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.752 [2024-04-24 20:14:32.873390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.752 [2024-04-24 20:14:32.873400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ddc0 with addr=10.0.0.2, port=4420 00:22:50.752 [2024-04-24 20:14:32.873412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205ddc0 is same with the state(5) to be set 00:22:50.752 [2024-04-24 20:14:32.873434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:50.752 [2024-04-24 20:14:32.873448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.752 [2024-04-24 20:14:32.873455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.752 [2024-04-24 20:14:32.873463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.752 [2024-04-24 20:14:32.873487] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:50.752 [2024-04-24 20:14:32.873496] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.688 [2024-04-24 20:14:33.871714] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.688 [2024-04-24 20:14:33.871819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.688 [2024-04-24 20:14:33.871846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.688 [2024-04-24 20:14:33.871856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ddc0 with addr=10.0.0.2, port=4420 00:22:51.688 [2024-04-24 20:14:33.871867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205ddc0 is same with the state(5) to be set 00:22:51.688 [2024-04-24 20:14:33.871888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:51.688 [2024-04-24 20:14:33.871902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.688 [2024-04-24 20:14:33.871909] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.688 [2024-04-24 20:14:33.871916] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.688 [2024-04-24 20:14:33.871939] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:51.688 [2024-04-24 20:14:33.871948] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.626 [2024-04-24 20:14:34.872854] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.627 [2024-04-24 20:14:34.872929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.627 [2024-04-24 20:14:34.872955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.627 [2024-04-24 20:14:34.872964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ddc0 with addr=10.0.0.2, port=4420 00:22:52.627 [2024-04-24 20:14:34.872974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205ddc0 is same with the state(5) to be set 00:22:52.627 [2024-04-24 20:14:34.873211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205ddc0 (9): Bad file descriptor 00:22:52.627 [2024-04-24 20:14:34.873448] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.627 [2024-04-24 20:14:34.873463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.627 [2024-04-24 20:14:34.873471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.627 [2024-04-24 20:14:34.876798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
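Reading the retry loop above: reconnect attempts land at 20:14:31.87, 32.87, 33.87 and 34.87, i.e. about one per second, and each attempt emits one uring.c:641 and two posix.c:1037 "connect() failed, errno = 111" errors before the reset is declared failed. A hypothetical way to tally the refused attempts when scanning a captured log (the file name and grouping are illustrative, not part of the test) is:

    # three 'errno = 111' lines per refused attempt: one from uring_sock_create, two from posix_sock_create
    grep 'connect() failed, errno = 111' bdevperf.log \
      | sed 's/.*\[2024-04-24 \(20:14:[0-9]*\)\..*/\1/' \
      | sort | uniq -c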
00:22:52.627 [2024-04-24 20:14:34.876836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.885 20:14:34 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.885 [2024-04-24 20:14:35.111835] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.143 20:14:35 -- host/timeout.sh@103 -- # wait 78469 00:22:53.710 [2024-04-24 20:14:35.906633] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:58.981 00:22:58.981 Latency(us) 00:22:58.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.981 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.981 Verification LBA range: start 0x0 length 0x4000 00:22:58.981 NVMe0n1 : 10.01 5647.26 22.06 4459.27 0.00 12640.86 533.02 3018433.62 00:22:58.981 =================================================================================================================== 00:22:58.981 Total : 5647.26 22.06 4459.27 0.00 12640.86 0.00 3018433.62 00:22:58.981 0 00:22:58.981 20:14:40 -- host/timeout.sh@105 -- # killprocess 78341 00:22:58.981 20:14:40 -- common/autotest_common.sh@936 -- # '[' -z 78341 ']' 00:22:58.981 20:14:40 -- common/autotest_common.sh@940 -- # kill -0 78341 00:22:58.981 20:14:40 -- common/autotest_common.sh@941 -- # uname 00:22:58.981 20:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:58.981 20:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78341 00:22:58.981 killing process with pid 78341 00:22:58.981 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.981 00:22:58.981 Latency(us) 00:22:58.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.981 =================================================================================================================== 00:22:58.981 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.981 20:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:58.981 20:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:58.981 20:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78341' 00:22:58.981 20:14:40 -- common/autotest_common.sh@955 -- # kill 78341 00:22:58.981 20:14:40 -- common/autotest_common.sh@960 -- # wait 78341 00:22:58.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.981 20:14:40 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:58.981 20:14:40 -- host/timeout.sh@110 -- # bdevperf_pid=78583 00:22:58.981 20:14:40 -- host/timeout.sh@112 -- # waitforlisten 78583 /var/tmp/bdevperf.sock 00:22:58.981 20:14:40 -- common/autotest_common.sh@817 -- # '[' -z 78583 ']' 00:22:58.981 20:14:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.981 20:14:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.981 20:14:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:58.982 20:14:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.982 20:14:40 -- common/autotest_common.sh@10 -- # set +x 00:22:58.982 [2024-04-24 20:14:41.004975] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:22:58.982 [2024-04-24 20:14:41.005615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78583 ] 00:22:58.982 [2024-04-24 20:14:41.151491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.248 [2024-04-24 20:14:41.256570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.812 20:14:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.812 20:14:41 -- common/autotest_common.sh@850 -- # return 0 00:22:59.812 20:14:41 -- host/timeout.sh@116 -- # dtrace_pid=78595 00:22:59.812 20:14:41 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 78583 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:59.812 20:14:41 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:00.069 20:14:42 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:00.327 NVMe0n1 00:23:00.327 20:14:42 -- host/timeout.sh@124 -- # rpc_pid=78642 00:23:00.327 20:14:42 -- host/timeout.sh@125 -- # sleep 1 00:23:00.327 20:14:42 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:00.585 Running I/O for 10 seconds... 
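Setup for the second run, condensed: bdevperf is started idle (-z, core mask 0x4) with queue depth 128 and 4096-byte random reads for 10 seconds, scripts/bpftrace.sh attaches scripts/bpf/nvmf_timeout.bt to the new pid, and the NVMe-oF controller is attached over the RPC socket with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2. A condensed sketch of the RPC-driven part, assuming bdevperf is already listening on /var/tmp/bdevperf.sock, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # options and controller attach exactly as traced above (host/timeout.sh@118 and @120)
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # then kick off the workload bdevperf was started with (-q 128 -o 4096 -w randread -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests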
00:23:01.521 20:14:43 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:01.521 [2024-04-24 20:14:43.689591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5e10 is same with the state(5) to be set
[... the tcp.c:1587 nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x10f5e10 repeats well over a hundred further times, with only the timestamp advancing through 2024-04-24 20:14:43.690335 ...]
00:23:01.522 [2024-04-24 20:14:43.690420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:01.522 [2024-04-24 20:14:43.690448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ print_command / ABORTED - SQ DELETION print_completion pairs follow for cid:1 through cid:126 (timestamps 20:14:43.690469 through 20:14:43.692778) as every read still queued on qid:1 is aborted ...]
00:23:01.525 [2024-04-24 20:14:43.692788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1905fa0 is same with the state(5) to be set
00:23:01.525 [2024-04-24 20:14:43.692798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:01.526 [2024-04-24 20:14:43.692806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:01.526 [2024-04-24 20:14:43.692812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24528 len:8 PRP1 0x0 PRP2 0x0
00:23:01.526 [2024-04-24 20:14:43.692818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:01.526 [2024-04-24 20:14:43.692870] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1905fa0 was disconnected and freed. reset controller.
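The abort storm above is the target-side fallout of the rpc.py nvmf_subsystem_remove_listener call at the top of this block: the TCP listener is pulled while queued reads are still in flight, so every outstanding command on qid:1 is completed as ABORTED - SQ DELETION and the qpair is torn down. A minimal sketch of the listener round-trip this scenario exercises (the remove is copied verbatim from the log; the matching re-add is not shown in this excerpt and is only an illustrative assumption):

  # drop the listener out from under a connected host while I/O is still queued
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # restore it afterwards so host-side reconnects can succeed again (illustrative; not part of this excerpt)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420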
00:23:01.526 [2024-04-24 20:14:43.693134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:01.526 [2024-04-24 20:14:43.693212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3030 (9): Bad file descriptor 00:23:01.526 [2024-04-24 20:14:43.693299] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-04-24 20:14:43.693344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-04-24 20:14:43.693370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:01.526 [2024-04-24 20:14:43.693391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c3030 with addr=10.0.0.2, port=4420 00:23:01.526 [2024-04-24 20:14:43.693400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3030 is same with the state(5) to be set 00:23:01.526 [2024-04-24 20:14:43.693413] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3030 (9): Bad file descriptor 00:23:01.526 [2024-04-24 20:14:43.693425] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:01.526 [2024-04-24 20:14:43.693431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:01.526 [2024-04-24 20:14:43.693439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:01.526 [2024-04-24 20:14:43.693455] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:01.526 [2024-04-24 20:14:43.693463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:01.526 20:14:43 -- host/timeout.sh@128 -- # wait 78642 00:23:04.053 [2024-04-24 20:14:45.689842] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.053 [2024-04-24 20:14:45.689938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.053 [2024-04-24 20:14:45.689966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.053 [2024-04-24 20:14:45.689976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c3030 with addr=10.0.0.2, port=4420 00:23:04.053 [2024-04-24 20:14:45.689988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3030 is same with the state(5) to be set 00:23:04.053 [2024-04-24 20:14:45.690011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3030 (9): Bad file descriptor 00:23:04.053 [2024-04-24 20:14:45.690038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:04.053 [2024-04-24 20:14:45.690072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:04.053 [2024-04-24 20:14:45.690083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:04.053 [2024-04-24 20:14:45.690106] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:04.053 [2024-04-24 20:14:45.690115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:05.953 [2024-04-24 20:14:47.686446] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.953 [2024-04-24 20:14:47.686543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.953 [2024-04-24 20:14:47.686570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:05.953 [2024-04-24 20:14:47.686581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c3030 with addr=10.0.0.2, port=4420 00:23:05.953 [2024-04-24 20:14:47.686592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c3030 is same with the state(5) to be set 00:23:05.953 [2024-04-24 20:14:47.686614] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c3030 (9): Bad file descriptor 00:23:05.953 [2024-04-24 20:14:47.686630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:05.953 [2024-04-24 20:14:47.686636] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:05.953 [2024-04-24 20:14:47.686653] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:05.953 [2024-04-24 20:14:47.686677] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:05.953 [2024-04-24 20:14:47.686707] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.857 [2024-04-24 20:14:49.682949] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:08.796 00:23:08.796 Latency(us) 00:23:08.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.796 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:08.796 NVMe0n1 : 8.11 2066.55 8.07 15.78 0.00 61508.42 7841.43 7033243.39 00:23:08.796 =================================================================================================================== 00:23:08.796 Total : 2066.55 8.07 15.78 0.00 61508.42 7841.43 7033243.39 00:23:08.796 0 00:23:08.796 20:14:50 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:08.796 Attaching 5 probes... 
00:23:08.796 1158.839962: reset bdev controller NVMe0 00:23:08.796 1158.960100: reconnect bdev controller NVMe0 00:23:08.796 3155.420566: reconnect delay bdev controller NVMe0 00:23:08.796 3155.444002: reconnect bdev controller NVMe0 00:23:08.796 5152.019399: reconnect delay bdev controller NVMe0 00:23:08.796 5152.045365: reconnect bdev controller NVMe0 00:23:08.796 7148.620969: reconnect delay bdev controller NVMe0 00:23:08.796 7148.645761: reconnect bdev controller NVMe0 00:23:08.796 20:14:50 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:08.796 20:14:50 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:08.796 20:14:50 -- host/timeout.sh@136 -- # kill 78595 00:23:08.796 20:14:50 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:08.796 20:14:50 -- host/timeout.sh@139 -- # killprocess 78583 00:23:08.796 20:14:50 -- common/autotest_common.sh@936 -- # '[' -z 78583 ']' 00:23:08.796 20:14:50 -- common/autotest_common.sh@940 -- # kill -0 78583 00:23:08.796 20:14:50 -- common/autotest_common.sh@941 -- # uname 00:23:08.796 20:14:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:08.796 20:14:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78583 00:23:08.796 killing process with pid 78583 00:23:08.796 Received shutdown signal, test time was about 8.193734 seconds 00:23:08.796 00:23:08.796 Latency(us) 00:23:08.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.796 =================================================================================================================== 00:23:08.796 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.796 20:14:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:08.796 20:14:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:08.796 20:14:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78583' 00:23:08.796 20:14:50 -- common/autotest_common.sh@955 -- # kill 78583 00:23:08.796 20:14:50 -- common/autotest_common.sh@960 -- # wait 78583 00:23:08.796 20:14:50 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.054 20:14:51 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:09.054 20:14:51 -- host/timeout.sh@145 -- # nvmftestfini 00:23:09.054 20:14:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:09.054 20:14:51 -- nvmf/common.sh@117 -- # sync 00:23:09.054 20:14:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:09.054 20:14:51 -- nvmf/common.sh@120 -- # set +e 00:23:09.054 20:14:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:09.054 20:14:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:09.054 rmmod nvme_tcp 00:23:09.054 rmmod nvme_fabrics 00:23:09.313 rmmod nvme_keyring 00:23:09.313 20:14:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:09.313 20:14:51 -- nvmf/common.sh@124 -- # set -e 00:23:09.313 20:14:51 -- nvmf/common.sh@125 -- # return 0 00:23:09.313 20:14:51 -- nvmf/common.sh@478 -- # '[' -n 78140 ']' 00:23:09.313 20:14:51 -- nvmf/common.sh@479 -- # killprocess 78140 00:23:09.313 20:14:51 -- common/autotest_common.sh@936 -- # '[' -z 78140 ']' 00:23:09.313 20:14:51 -- common/autotest_common.sh@940 -- # kill -0 78140 00:23:09.313 20:14:51 -- common/autotest_common.sh@941 -- # uname 00:23:09.313 20:14:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.313 20:14:51 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 78140 00:23:09.313 killing process with pid 78140 00:23:09.313 20:14:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:09.313 20:14:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:09.313 20:14:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78140' 00:23:09.313 20:14:51 -- common/autotest_common.sh@955 -- # kill 78140 00:23:09.313 [2024-04-24 20:14:51.362513] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:09.313 20:14:51 -- common/autotest_common.sh@960 -- # wait 78140 00:23:09.573 20:14:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:09.573 20:14:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:09.573 20:14:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:09.573 20:14:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.573 20:14:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.573 20:14:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.573 20:14:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.573 20:14:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.573 20:14:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:09.573 00:23:09.573 real 0m46.677s 00:23:09.573 user 2m17.301s 00:23:09.573 sys 0m5.141s 00:23:09.573 20:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.573 20:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:09.573 ************************************ 00:23:09.573 END TEST nvmf_timeout 00:23:09.573 ************************************ 00:23:09.573 20:14:51 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:23:09.573 20:14:51 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:23:09.573 20:14:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:09.573 20:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:09.573 20:14:51 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:23:09.573 00:23:09.573 real 8m34.204s 00:23:09.573 user 20m20.118s 00:23:09.573 sys 2m10.953s 00:23:09.573 20:14:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:09.573 20:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:09.573 ************************************ 00:23:09.573 END TEST nvmf_tcp 00:23:09.573 ************************************ 00:23:09.573 20:14:51 -- spdk/autotest.sh@286 -- # [[ 1 -eq 0 ]] 00:23:09.573 20:14:51 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:09.573 20:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:09.573 20:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:09.573 20:14:51 -- common/autotest_common.sh@10 -- # set +x 00:23:09.833 ************************************ 00:23:09.833 START TEST nvmf_dif 00:23:09.833 ************************************ 00:23:09.833 20:14:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:09.833 * Looking for test storage... 
00:23:09.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:09.833 20:14:51 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:09.833 20:14:51 -- nvmf/common.sh@7 -- # uname -s 00:23:09.833 20:14:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.833 20:14:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.833 20:14:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.833 20:14:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.833 20:14:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.833 20:14:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.833 20:14:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.833 20:14:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.833 20:14:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.833 20:14:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.833 20:14:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:23:09.833 20:14:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:23:09.833 20:14:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.833 20:14:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.833 20:14:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:09.833 20:14:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.833 20:14:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.833 20:14:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.833 20:14:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.833 20:14:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.833 20:14:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.833 20:14:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.833 20:14:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.833 20:14:52 -- paths/export.sh@5 -- # export PATH 00:23:09.833 20:14:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.833 20:14:52 -- nvmf/common.sh@47 -- # : 0 00:23:09.833 20:14:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.833 20:14:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.833 20:14:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.833 20:14:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.833 20:14:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.833 20:14:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.833 20:14:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.833 20:14:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.833 20:14:52 -- target/dif.sh@15 -- # NULL_META=16 00:23:09.833 20:14:52 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:09.833 20:14:52 -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:09.833 20:14:52 -- target/dif.sh@15 -- # NULL_DIF=1 00:23:09.833 20:14:52 -- target/dif.sh@135 -- # nvmftestinit 00:23:09.833 20:14:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:09.833 20:14:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.833 20:14:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:09.833 20:14:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:09.833 20:14:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:09.833 20:14:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.833 20:14:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:09.833 20:14:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.833 20:14:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:09.833 20:14:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:09.833 20:14:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:09.833 20:14:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:09.833 20:14:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:09.833 20:14:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:09.833 20:14:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.833 20:14:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.833 20:14:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:09.833 20:14:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:09.833 20:14:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:09.833 20:14:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:09.833 20:14:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:09.833 20:14:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.833 20:14:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:09.833 20:14:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:09.833 20:14:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:09.833 20:14:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:09.833 20:14:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:09.833 20:14:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:09.833 Cannot find device "nvmf_tgt_br" 
00:23:09.833 20:14:52 -- nvmf/common.sh@155 -- # true 00:23:09.833 20:14:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:09.833 Cannot find device "nvmf_tgt_br2" 00:23:09.833 20:14:52 -- nvmf/common.sh@156 -- # true 00:23:09.833 20:14:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:09.833 20:14:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:10.092 Cannot find device "nvmf_tgt_br" 00:23:10.092 20:14:52 -- nvmf/common.sh@158 -- # true 00:23:10.092 20:14:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:10.092 Cannot find device "nvmf_tgt_br2" 00:23:10.092 20:14:52 -- nvmf/common.sh@159 -- # true 00:23:10.092 20:14:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:10.092 20:14:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:10.092 20:14:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:10.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.092 20:14:52 -- nvmf/common.sh@162 -- # true 00:23:10.092 20:14:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:10.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:10.092 20:14:52 -- nvmf/common.sh@163 -- # true 00:23:10.092 20:14:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:10.092 20:14:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:10.092 20:14:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:10.092 20:14:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:10.092 20:14:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:10.092 20:14:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:10.092 20:14:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:10.092 20:14:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:10.092 20:14:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:10.092 20:14:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:10.092 20:14:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:10.092 20:14:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:10.092 20:14:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:10.092 20:14:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:10.092 20:14:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:10.092 20:14:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:10.092 20:14:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:10.092 20:14:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:10.092 20:14:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:10.092 20:14:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:10.092 20:14:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:10.092 20:14:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:10.092 20:14:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:10.092 20:14:52 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:10.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:23:10.093 00:23:10.093 --- 10.0.0.2 ping statistics --- 00:23:10.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.093 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:10.093 20:14:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:10.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:10.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:23:10.093 00:23:10.093 --- 10.0.0.3 ping statistics --- 00:23:10.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.093 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:10.093 20:14:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:10.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:23:10.093 00:23:10.093 --- 10.0.0.1 ping statistics --- 00:23:10.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.093 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:10.093 20:14:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.093 20:14:52 -- nvmf/common.sh@422 -- # return 0 00:23:10.093 20:14:52 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:23:10.093 20:14:52 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:10.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:10.667 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:10.667 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:10.667 20:14:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.667 20:14:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:10.667 20:14:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:10.667 20:14:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.667 20:14:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:10.667 20:14:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:10.667 20:14:52 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:10.667 20:14:52 -- target/dif.sh@137 -- # nvmfappstart 00:23:10.667 20:14:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:10.667 20:14:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:10.667 20:14:52 -- common/autotest_common.sh@10 -- # set +x 00:23:10.667 20:14:52 -- nvmf/common.sh@470 -- # nvmfpid=79087 00:23:10.667 20:14:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:10.667 20:14:52 -- nvmf/common.sh@471 -- # waitforlisten 79087 00:23:10.667 20:14:52 -- common/autotest_common.sh@817 -- # '[' -z 79087 ']' 00:23:10.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.667 20:14:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.667 20:14:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:10.667 20:14:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
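(Annotation: the nvmf_veth_init sequence above builds the whole test network in software: one veth pair for the initiator, veth pairs whose target ends are moved into the nvmf_tgt_ns_spdk namespace, all bridged together, with 4420/tcp explicitly allowed; the pings confirm 10.0.0.1 on the initiator side can reach 10.0.0.2 and 10.0.0.3 in the namespace before the target is started. A condensed sketch of the same topology, reduced to a single target interface:)
  # Condensed sketch of the veth/netns topology set up above (single target
  # interface only; interface and namespace names match the ones in this log).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br master nvmf_br && ip link set nvmf_tgt_br up
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # reachable once both bridge legs are up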
00:23:10.667 20:14:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:10.667 20:14:52 -- common/autotest_common.sh@10 -- # set +x 00:23:10.667 [2024-04-24 20:14:52.897104] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:23:10.667 [2024-04-24 20:14:52.897196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.939 [2024-04-24 20:14:53.036787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.939 [2024-04-24 20:14:53.151483] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.939 [2024-04-24 20:14:53.151540] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.939 [2024-04-24 20:14:53.151548] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.939 [2024-04-24 20:14:53.151554] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.939 [2024-04-24 20:14:53.151559] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.939 [2024-04-24 20:14:53.151591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.876 20:14:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.876 20:14:53 -- common/autotest_common.sh@850 -- # return 0 00:23:11.876 20:14:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:11.876 20:14:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 20:14:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.876 20:14:53 -- target/dif.sh@139 -- # create_transport 00:23:11.876 20:14:53 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:11.876 20:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 [2024-04-24 20:14:53.831552] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.876 20:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.876 20:14:53 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:11.876 20:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:11.876 20:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 ************************************ 00:23:11.876 START TEST fio_dif_1_default 00:23:11.876 ************************************ 00:23:11.876 20:14:53 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:23:11.876 20:14:53 -- target/dif.sh@86 -- # create_subsystems 0 00:23:11.876 20:14:53 -- target/dif.sh@28 -- # local sub 00:23:11.876 20:14:53 -- target/dif.sh@30 -- # for sub in "$@" 00:23:11.876 20:14:53 -- target/dif.sh@31 -- # create_subsystem 0 00:23:11.876 20:14:53 -- target/dif.sh@18 -- # local sub_id=0 00:23:11.876 20:14:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:11.876 20:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 bdev_null0 00:23:11.876 20:14:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.876 20:14:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:11.876 20:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 20:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.876 20:14:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:11.876 20:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 20:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.876 20:14:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:11.876 20:14:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.876 20:14:53 -- common/autotest_common.sh@10 -- # set +x 00:23:11.876 [2024-04-24 20:14:53.931332] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:11.876 [2024-04-24 20:14:53.931742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.876 20:14:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.876 20:14:53 -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:11.876 20:14:53 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:11.876 20:14:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:11.876 20:14:53 -- nvmf/common.sh@521 -- # config=() 00:23:11.876 20:14:53 -- nvmf/common.sh@521 -- # local subsystem config 00:23:11.876 20:14:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:11.876 20:14:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:11.876 { 00:23:11.876 "params": { 00:23:11.876 "name": "Nvme$subsystem", 00:23:11.876 "trtype": "$TEST_TRANSPORT", 00:23:11.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.876 "adrfam": "ipv4", 00:23:11.876 "trsvcid": "$NVMF_PORT", 00:23:11.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.876 "hdgst": ${hdgst:-false}, 00:23:11.876 "ddgst": ${ddgst:-false} 00:23:11.876 }, 00:23:11.876 "method": "bdev_nvme_attach_controller" 00:23:11.876 } 00:23:11.876 EOF 00:23:11.876 )") 00:23:11.876 20:14:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.876 20:14:53 -- target/dif.sh@82 -- # gen_fio_conf 00:23:11.876 20:14:53 -- nvmf/common.sh@543 -- # cat 00:23:11.876 20:14:53 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:11.876 20:14:53 -- target/dif.sh@54 -- # local file 00:23:11.876 20:14:53 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:11.876 20:14:53 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:11.876 20:14:53 -- target/dif.sh@56 -- # cat 00:23:11.876 20:14:53 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:11.876 20:14:53 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.876 20:14:53 -- common/autotest_common.sh@1327 -- # shift 00:23:11.876 20:14:53 -- common/autotest_common.sh@1329 -- # local asan_lib= 
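(Annotation: the rpc_cmd calls above that create the transport, the DIF-capable null bdev, the subsystem, its namespace, and the TCP listener are thin wrappers around scripts/rpc.py aimed at the nvmf_tgt started a few lines earlier. Issued by hand against the same target the setup would look roughly like this, assuming the default /var/tmp/spdk.sock RPC socket; the arguments are copied from the rpc_cmd lines above:)
  # Rough rpc.py equivalent of the fio_dif_1_default subsystem setup above
  # (default RPC socket assumed; argument values taken verbatim from this log).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
       --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
       -t tcp -a 10.0.0.2 -s 4420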
00:23:11.876 20:14:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.876 20:14:53 -- nvmf/common.sh@545 -- # jq . 00:23:11.876 20:14:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:11.876 20:14:53 -- target/dif.sh@72 -- # (( file <= files )) 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.876 20:14:53 -- nvmf/common.sh@546 -- # IFS=, 00:23:11.876 20:14:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:11.876 "params": { 00:23:11.876 "name": "Nvme0", 00:23:11.876 "trtype": "tcp", 00:23:11.876 "traddr": "10.0.0.2", 00:23:11.876 "adrfam": "ipv4", 00:23:11.876 "trsvcid": "4420", 00:23:11.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:11.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:11.876 "hdgst": false, 00:23:11.876 "ddgst": false 00:23:11.876 }, 00:23:11.876 "method": "bdev_nvme_attach_controller" 00:23:11.876 }' 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:11.876 20:14:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:11.876 20:14:53 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:11.876 20:14:53 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:11.876 20:14:53 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:11.876 20:14:53 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:11.876 20:14:53 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:12.136 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:12.136 fio-3.35 00:23:12.136 Starting 1 thread 00:23:24.479 00:23:24.479 filename0: (groupid=0, jobs=1): err= 0: pid=79159: Wed Apr 24 20:15:04 2024 00:23:24.479 read: IOPS=9826, BW=38.4MiB/s (40.2MB/s)(384MiB/10001msec) 00:23:24.479 slat (usec): min=5, max=121, avg= 7.69, stdev= 5.52 00:23:24.479 clat (usec): min=291, max=1506, avg=385.70, stdev=61.18 00:23:24.479 lat (usec): min=297, max=1612, avg=393.39, stdev=65.67 00:23:24.479 clat percentiles (usec): 00:23:24.479 | 1.00th=[ 310], 5.00th=[ 334], 10.00th=[ 351], 20.00th=[ 363], 00:23:24.479 | 30.00th=[ 375], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 392], 00:23:24.479 | 70.00th=[ 396], 80.00th=[ 400], 90.00th=[ 408], 95.00th=[ 416], 00:23:24.479 | 99.00th=[ 482], 99.50th=[ 709], 99.90th=[ 1287], 99.95th=[ 1352], 00:23:24.479 | 99.99th=[ 1434] 00:23:24.479 bw ( KiB/s): min=28224, max=42720, per=99.84%, avg=39242.11, stdev=3077.77, samples=19 00:23:24.479 iops : min= 7056, max=10680, avg=9810.53, stdev=769.44, samples=19 00:23:24.479 lat (usec) : 500=99.20%, 750=0.32%, 1000=0.07% 00:23:24.479 lat (msec) : 2=0.41% 00:23:24.479 cpu : usr=86.33%, sys=12.04%, ctx=110, majf=0, minf=0 00:23:24.479 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:23:24.479 issued rwts: total=98276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.479 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:24.479 00:23:24.479 Run status group 0 (all jobs): 00:23:24.479 READ: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=384MiB (403MB), run=10001-10001msec 00:23:24.479 20:15:04 -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:24.479 20:15:04 -- target/dif.sh@43 -- # local sub 00:23:24.479 20:15:04 -- target/dif.sh@45 -- # for sub in "$@" 00:23:24.479 20:15:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:24.479 20:15:04 -- target/dif.sh@36 -- # local sub_id=0 00:23:24.479 20:15:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:24.479 20:15:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.479 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:23:24.479 20:15:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.479 20:15:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:24.479 20:15:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.479 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:23:24.479 ************************************ 00:23:24.479 END TEST fio_dif_1_default 00:23:24.479 ************************************ 00:23:24.479 20:15:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.479 00:23:24.479 real 0m10.977s 00:23:24.479 user 0m9.260s 00:23:24.479 sys 0m1.442s 00:23:24.479 20:15:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:24.479 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:23:24.479 20:15:04 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:24.479 20:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:24.479 20:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:24.479 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:23:24.479 ************************************ 00:23:24.479 START TEST fio_dif_1_multi_subsystems 00:23:24.479 ************************************ 00:23:24.479 20:15:05 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:23:24.479 20:15:05 -- target/dif.sh@92 -- # local files=1 00:23:24.479 20:15:05 -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:24.479 20:15:05 -- target/dif.sh@28 -- # local sub 00:23:24.479 20:15:05 -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.479 20:15:05 -- target/dif.sh@31 -- # create_subsystem 0 00:23:24.479 20:15:05 -- target/dif.sh@18 -- # local sub_id=0 00:23:24.479 20:15:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 bdev_null0 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- 
# set +x 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 [2024-04-24 20:15:05.051868] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.480 20:15:05 -- target/dif.sh@31 -- # create_subsystem 1 00:23:24.480 20:15:05 -- target/dif.sh@18 -- # local sub_id=1 00:23:24.480 20:15:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 bdev_null1 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.480 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.480 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:23:24.480 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.480 20:15:05 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:24.480 20:15:05 -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:24.480 20:15:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:24.480 20:15:05 -- nvmf/common.sh@521 -- # config=() 00:23:24.480 20:15:05 -- nvmf/common.sh@521 -- # local subsystem config 00:23:24.480 20:15:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:24.480 20:15:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:24.480 { 00:23:24.480 "params": { 00:23:24.480 "name": "Nvme$subsystem", 00:23:24.480 "trtype": "$TEST_TRANSPORT", 00:23:24.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.480 "adrfam": "ipv4", 00:23:24.480 "trsvcid": "$NVMF_PORT", 00:23:24.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.480 "hdgst": ${hdgst:-false}, 00:23:24.480 "ddgst": ${ddgst:-false} 00:23:24.480 }, 00:23:24.480 "method": "bdev_nvme_attach_controller" 00:23:24.480 } 00:23:24.480 EOF 00:23:24.480 )") 00:23:24.480 20:15:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.480 20:15:05 -- target/dif.sh@82 -- # gen_fio_conf 00:23:24.480 20:15:05 -- nvmf/common.sh@543 -- # cat 00:23:24.480 20:15:05 -- target/dif.sh@54 -- # local file 00:23:24.480 20:15:05 -- common/autotest_common.sh@1342 -- # 
fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.480 20:15:05 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:24.480 20:15:05 -- target/dif.sh@56 -- # cat 00:23:24.480 20:15:05 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.480 20:15:05 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:24.480 20:15:05 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.480 20:15:05 -- common/autotest_common.sh@1327 -- # shift 00:23:24.480 20:15:05 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:24.480 20:15:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:24.480 20:15:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:24.480 { 00:23:24.480 "params": { 00:23:24.480 "name": "Nvme$subsystem", 00:23:24.480 "trtype": "$TEST_TRANSPORT", 00:23:24.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.480 "adrfam": "ipv4", 00:23:24.480 "trsvcid": "$NVMF_PORT", 00:23:24.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.480 "hdgst": ${hdgst:-false}, 00:23:24.480 "ddgst": ${ddgst:-false} 00:23:24.480 }, 00:23:24.480 "method": "bdev_nvme_attach_controller" 00:23:24.480 } 00:23:24.480 EOF 00:23:24.480 )") 00:23:24.480 20:15:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.480 20:15:05 -- nvmf/common.sh@543 -- # cat 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.480 20:15:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:24.480 20:15:05 -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:24.480 20:15:05 -- target/dif.sh@73 -- # cat 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:24.480 20:15:05 -- nvmf/common.sh@545 -- # jq . 
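(Annotation: fio_bdev and fio_plugin above are the harness around the external SPDK fio engine: they inspect the plugin with ldd, preload the matching ASan runtime if one is linked in (asan_lib stays empty here), and then preload the plugin itself before invoking fio with --ioengine=spdk_bdev, the generated JSON on /dev/fd/62, and the job file on /dev/fd/61. Stripped of the sanitizer detection, the invocation reduces to roughly the following; bdev.json and job.fio stand in for the /dev/fd descriptors and are assumptions:)
  # Hedged reduction of what fio_bdev does above once asan_lib turns out empty:
  # preload the SPDK bdev engine and hand fio the generated attach-controller JSON.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt.asan/ {print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio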
00:23:24.480 20:15:05 -- nvmf/common.sh@546 -- # IFS=, 00:23:24.480 20:15:05 -- target/dif.sh@72 -- # (( file++ )) 00:23:24.480 20:15:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:24.480 "params": { 00:23:24.480 "name": "Nvme0", 00:23:24.480 "trtype": "tcp", 00:23:24.480 "traddr": "10.0.0.2", 00:23:24.480 "adrfam": "ipv4", 00:23:24.480 "trsvcid": "4420", 00:23:24.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:24.480 "hdgst": false, 00:23:24.480 "ddgst": false 00:23:24.480 }, 00:23:24.480 "method": "bdev_nvme_attach_controller" 00:23:24.480 },{ 00:23:24.480 "params": { 00:23:24.480 "name": "Nvme1", 00:23:24.480 "trtype": "tcp", 00:23:24.480 "traddr": "10.0.0.2", 00:23:24.480 "adrfam": "ipv4", 00:23:24.480 "trsvcid": "4420", 00:23:24.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.480 "hdgst": false, 00:23:24.480 "ddgst": false 00:23:24.480 }, 00:23:24.480 "method": "bdev_nvme_attach_controller" 00:23:24.480 }' 00:23:24.480 20:15:05 -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:24.480 20:15:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:24.480 20:15:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:24.480 20:15:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:24.480 20:15:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:24.480 20:15:05 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:24.480 20:15:05 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.480 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:24.480 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:24.480 fio-3.35 00:23:24.480 Starting 2 threads 00:23:34.504 00:23:34.504 filename0: (groupid=0, jobs=1): err= 0: pid=79327: Wed Apr 24 20:15:15 2024 00:23:34.504 read: IOPS=5233, BW=20.4MiB/s (21.4MB/s)(204MiB/10001msec) 00:23:34.504 slat (usec): min=4, max=145, avg=13.20, stdev= 3.79 00:23:34.504 clat (usec): min=526, max=3839, avg=728.44, stdev=97.48 00:23:34.504 lat (usec): min=532, max=3866, avg=741.63, stdev=98.37 00:23:34.504 clat percentiles (usec): 00:23:34.504 | 1.00th=[ 578], 5.00th=[ 611], 10.00th=[ 635], 20.00th=[ 676], 00:23:34.504 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 742], 00:23:34.504 | 70.00th=[ 758], 80.00th=[ 766], 90.00th=[ 791], 95.00th=[ 807], 00:23:34.504 | 99.00th=[ 938], 99.50th=[ 1287], 99.90th=[ 1991], 99.95th=[ 2409], 00:23:34.504 | 99.99th=[ 3687] 00:23:34.504 bw ( KiB/s): min=18240, max=23264, per=49.90%, avg=20906.11, stdev=1218.71, samples=19 00:23:34.504 iops : min= 4560, max= 5816, avg=5226.53, stdev=304.68, samples=19 00:23:34.504 lat (usec) : 750=65.05%, 1000=34.19% 00:23:34.504 lat (msec) : 2=0.66%, 4=0.10% 00:23:34.505 cpu : usr=92.38%, sys=6.47%, ctx=100, majf=0, minf=9 00:23:34.505 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.505 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.505 issued rwts: total=52336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.505 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:34.505 filename1: (groupid=0, jobs=1): err= 0: pid=79328: Wed Apr 24 20:15:15 2024 00:23:34.505 read: IOPS=5240, BW=20.5MiB/s (21.5MB/s)(205MiB/10001msec) 00:23:34.505 slat (nsec): min=5080, max=45653, avg=13134.01, stdev=3464.98 00:23:34.505 clat (usec): min=353, max=2800, avg=727.52, stdev=91.92 00:23:34.505 lat (usec): min=359, max=2819, avg=740.65, stdev=92.46 00:23:34.505 clat percentiles (usec): 00:23:34.505 | 1.00th=[ 578], 5.00th=[ 611], 10.00th=[ 644], 20.00th=[ 676], 00:23:34.505 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 742], 00:23:34.505 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 783], 95.00th=[ 807], 00:23:34.505 | 99.00th=[ 922], 99.50th=[ 1303], 99.90th=[ 1696], 99.95th=[ 2442], 00:23:34.505 | 99.99th=[ 2671] 00:23:34.505 bw ( KiB/s): min=18272, max=23264, per=49.97%, avg=20935.16, stdev=1202.76, samples=19 00:23:34.505 iops : min= 4568, max= 5816, avg=5233.79, stdev=300.69, samples=19 00:23:34.505 lat (usec) : 500=0.14%, 750=67.27%, 1000=31.90% 00:23:34.505 lat (msec) : 2=0.59%, 4=0.10% 00:23:34.505 cpu : usr=92.72%, sys=6.19%, ctx=6, majf=0, minf=0 00:23:34.505 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:34.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:34.505 issued rwts: total=52412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:34.505 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:34.505 00:23:34.505 Run status group 0 (all jobs): 00:23:34.505 READ: bw=40.9MiB/s (42.9MB/s), 20.4MiB/s-20.5MiB/s (21.4MB/s-21.5MB/s), io=409MiB (429MB), run=10001-10001msec 00:23:34.505 20:15:16 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:34.505 20:15:16 -- target/dif.sh@43 -- # local sub 00:23:34.505 20:15:16 -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.505 20:15:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:34.505 20:15:16 -- target/dif.sh@36 -- # local sub_id=0 00:23:34.505 20:15:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@45 -- # for sub in "$@" 00:23:34.505 20:15:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:34.505 20:15:16 -- target/dif.sh@36 -- # local sub_id=1 00:23:34.505 20:15:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:34.505 20:15:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 ************************************ 00:23:34.505 END TEST fio_dif_1_multi_subsystems 00:23:34.505 ************************************ 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 00:23:34.505 real 0m11.121s 00:23:34.505 user 0m19.223s 00:23:34.505 sys 0m1.529s 00:23:34.505 20:15:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 20:15:16 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:34.505 20:15:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:34.505 20:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 ************************************ 00:23:34.505 START TEST fio_dif_rand_params 00:23:34.505 ************************************ 00:23:34.505 20:15:16 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:23:34.505 20:15:16 -- target/dif.sh@100 -- # local NULL_DIF 00:23:34.505 20:15:16 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:34.505 20:15:16 -- target/dif.sh@103 -- # NULL_DIF=3 00:23:34.505 20:15:16 -- target/dif.sh@103 -- # bs=128k 00:23:34.505 20:15:16 -- target/dif.sh@103 -- # numjobs=3 00:23:34.505 20:15:16 -- target/dif.sh@103 -- # iodepth=3 00:23:34.505 20:15:16 -- target/dif.sh@103 -- # runtime=5 00:23:34.505 20:15:16 -- target/dif.sh@105 -- # create_subsystems 0 00:23:34.505 20:15:16 -- target/dif.sh@28 -- # local sub 00:23:34.505 20:15:16 -- target/dif.sh@30 -- # for sub in "$@" 00:23:34.505 20:15:16 -- target/dif.sh@31 -- # create_subsystem 0 00:23:34.505 20:15:16 -- target/dif.sh@18 -- # local sub_id=0 00:23:34.505 20:15:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 bdev_null0 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.505 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.505 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:34.505 [2024-04-24 20:15:16.311724] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.505 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.505 20:15:16 -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:34.505 20:15:16 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:34.505 
20:15:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:34.505 20:15:16 -- nvmf/common.sh@521 -- # config=() 00:23:34.505 20:15:16 -- nvmf/common.sh@521 -- # local subsystem config 00:23:34.505 20:15:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.505 20:15:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:34.505 20:15:16 -- target/dif.sh@82 -- # gen_fio_conf 00:23:34.505 20:15:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:34.505 { 00:23:34.505 "params": { 00:23:34.505 "name": "Nvme$subsystem", 00:23:34.505 "trtype": "$TEST_TRANSPORT", 00:23:34.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.505 "adrfam": "ipv4", 00:23:34.505 "trsvcid": "$NVMF_PORT", 00:23:34.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.505 "hdgst": ${hdgst:-false}, 00:23:34.505 "ddgst": ${ddgst:-false} 00:23:34.505 }, 00:23:34.505 "method": "bdev_nvme_attach_controller" 00:23:34.505 } 00:23:34.505 EOF 00:23:34.505 )") 00:23:34.505 20:15:16 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.505 20:15:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:34.505 20:15:16 -- target/dif.sh@54 -- # local file 00:23:34.505 20:15:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.505 20:15:16 -- target/dif.sh@56 -- # cat 00:23:34.505 20:15:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:34.505 20:15:16 -- nvmf/common.sh@543 -- # cat 00:23:34.505 20:15:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.505 20:15:16 -- common/autotest_common.sh@1327 -- # shift 00:23:34.505 20:15:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:34.505 20:15:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.505 20:15:16 -- nvmf/common.sh@545 -- # jq . 
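(Annotation: for fio_dif_rand_params the null bdev above is created with --dif-type 3 and gen_fio_conf requests a randread job with bs=128k, iodepth=3, three jobs and a five second runtime, which matches the "Starting 3 threads" banner and the per-thread results below. A hedged reconstruction of the job file piped in on /dev/fd/61; the exact option spelling and the Nvme0n1 filename are assumptions based on the attached controller name:)
  # Hedged reconstruction of the fio job gen_fio_conf produces for this run
  # (option spelling and the Nvme0n1 filename are assumptions).
  cat <<'EOF' > rand_params.fio
  [global]
  thread=1
  ioengine=spdk_bdev
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  
  [filename0]
  filename=Nvme0n1
  EOF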
00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:34.505 20:15:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:34.505 20:15:16 -- nvmf/common.sh@546 -- # IFS=, 00:23:34.505 20:15:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:34.505 "params": { 00:23:34.505 "name": "Nvme0", 00:23:34.505 "trtype": "tcp", 00:23:34.505 "traddr": "10.0.0.2", 00:23:34.505 "adrfam": "ipv4", 00:23:34.505 "trsvcid": "4420", 00:23:34.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:34.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:34.505 "hdgst": false, 00:23:34.505 "ddgst": false 00:23:34.505 }, 00:23:34.505 "method": "bdev_nvme_attach_controller" 00:23:34.505 }' 00:23:34.505 20:15:16 -- target/dif.sh@72 -- # (( file <= files )) 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:34.505 20:15:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:34.505 20:15:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:34.505 20:15:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:34.505 20:15:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:34.505 20:15:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.505 20:15:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:34.505 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:34.505 ... 
00:23:34.505 fio-3.35 00:23:34.506 Starting 3 threads 00:23:39.781 00:23:39.781 filename0: (groupid=0, jobs=1): err= 0: pid=79489: Wed Apr 24 20:15:22 2024 00:23:39.781 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5003msec) 00:23:39.781 slat (nsec): min=6161, max=46277, avg=16412.58, stdev=5043.80 00:23:39.781 clat (usec): min=7036, max=13868, avg=10561.81, stdev=639.31 00:23:39.781 lat (usec): min=7053, max=13895, avg=10578.22, stdev=640.29 00:23:39.781 clat percentiles (usec): 00:23:39.781 | 1.00th=[ 9372], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10028], 00:23:39.781 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:23:39.781 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11338], 95.00th=[11469], 00:23:39.781 | 99.00th=[12387], 99.50th=[13566], 99.90th=[13829], 99.95th=[13829], 00:23:39.781 | 99.99th=[13829] 00:23:39.781 bw ( KiB/s): min=34560, max=37632, per=33.56%, avg=36445.44, stdev=1162.70, samples=9 00:23:39.781 iops : min= 270, max= 294, avg=284.67, stdev= 9.06, samples=9 00:23:39.781 lat (msec) : 10=17.30%, 20=82.70% 00:23:39.781 cpu : usr=93.86%, sys=5.68%, ctx=7, majf=0, minf=0 00:23:39.781 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.781 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.781 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:39.781 filename0: (groupid=0, jobs=1): err= 0: pid=79490: Wed Apr 24 20:15:22 2024 00:23:39.781 read: IOPS=282, BW=35.4MiB/s (37.1MB/s)(177MiB/5004msec) 00:23:39.781 slat (nsec): min=5971, max=44022, avg=16165.84, stdev=4803.81 00:23:39.781 clat (usec): min=7729, max=13863, avg=10565.27, stdev=619.46 00:23:39.781 lat (usec): min=7746, max=13896, avg=10581.44, stdev=620.27 00:23:39.781 clat percentiles (usec): 00:23:39.781 | 1.00th=[ 9372], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10028], 00:23:39.781 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:23:39.781 | 70.00th=[10945], 80.00th=[10945], 90.00th=[11338], 95.00th=[11469], 00:23:39.781 | 99.00th=[12387], 99.50th=[13566], 99.90th=[13829], 99.95th=[13829], 00:23:39.781 | 99.99th=[13829] 00:23:39.781 bw ( KiB/s): min=34560, max=37632, per=33.56%, avg=36437.33, stdev=1159.09, samples=9 00:23:39.781 iops : min= 270, max= 294, avg=284.67, stdev= 9.06, samples=9 00:23:39.781 lat (msec) : 10=17.80%, 20=82.20% 00:23:39.781 cpu : usr=94.32%, sys=5.24%, ctx=5, majf=0, minf=9 00:23:39.781 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.781 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.781 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:39.781 filename0: (groupid=0, jobs=1): err= 0: pid=79491: Wed Apr 24 20:15:22 2024 00:23:39.781 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(177MiB/5002msec) 00:23:39.781 slat (nsec): min=3661, max=53192, avg=15656.77, stdev=5702.15 00:23:39.781 clat (usec): min=9146, max=16307, avg=10582.60, stdev=660.66 00:23:39.781 lat (usec): min=9159, max=16331, avg=10598.25, stdev=661.57 00:23:39.781 clat percentiles (usec): 00:23:39.781 | 1.00th=[ 9503], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10028], 00:23:39.781 | 30.00th=[10290], 40.00th=[10421], 
50.00th=[10552], 60.00th=[10683], 00:23:39.781 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11469], 00:23:39.781 | 99.00th=[12518], 99.50th=[13829], 99.90th=[16319], 99.95th=[16319], 00:23:39.781 | 99.99th=[16319] 00:23:39.781 bw ( KiB/s): min=34491, max=37632, per=33.54%, avg=36421.56, stdev=1043.15, samples=9 00:23:39.781 iops : min= 269, max= 294, avg=284.44, stdev= 8.28, samples=9 00:23:39.781 lat (msec) : 10=17.20%, 20=82.80% 00:23:39.781 cpu : usr=93.44%, sys=6.08%, ctx=5, majf=0, minf=0 00:23:39.781 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:39.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.781 issued rwts: total=1413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:39.781 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:39.781 00:23:39.781 Run status group 0 (all jobs): 00:23:39.781 READ: bw=106MiB/s (111MB/s), 35.3MiB/s-35.4MiB/s (37.0MB/s-37.1MB/s), io=531MiB (556MB), run=5002-5004msec 00:23:40.041 20:15:22 -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:40.041 20:15:22 -- target/dif.sh@43 -- # local sub 00:23:40.041 20:15:22 -- target/dif.sh@45 -- # for sub in "$@" 00:23:40.041 20:15:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:40.041 20:15:22 -- target/dif.sh@36 -- # local sub_id=0 00:23:40.041 20:15:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:40.041 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.041 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.041 20:15:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:40.041 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.041 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.041 20:15:22 -- target/dif.sh@109 -- # NULL_DIF=2 00:23:40.041 20:15:22 -- target/dif.sh@109 -- # bs=4k 00:23:40.041 20:15:22 -- target/dif.sh@109 -- # numjobs=8 00:23:40.041 20:15:22 -- target/dif.sh@109 -- # iodepth=16 00:23:40.041 20:15:22 -- target/dif.sh@109 -- # runtime= 00:23:40.041 20:15:22 -- target/dif.sh@109 -- # files=2 00:23:40.041 20:15:22 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:40.041 20:15:22 -- target/dif.sh@28 -- # local sub 00:23:40.041 20:15:22 -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.041 20:15:22 -- target/dif.sh@31 -- # create_subsystem 0 00:23:40.041 20:15:22 -- target/dif.sh@18 -- # local sub_id=0 00:23:40.041 20:15:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:40.041 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.041 bdev_null0 00:23:40.041 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.041 20:15:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:40.041 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.041 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.041 20:15:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:40.041 20:15:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.041 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.041 20:15:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.041 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.041 [2024-04-24 20:15:22.280833] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.041 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.041 20:15:22 -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.041 20:15:22 -- target/dif.sh@31 -- # create_subsystem 1 00:23:40.041 20:15:22 -- target/dif.sh@18 -- # local sub_id=1 00:23:40.041 20:15:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:40.041 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.041 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 bdev_null1 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@30 -- # for sub in "$@" 00:23:40.300 20:15:22 -- target/dif.sh@31 -- # create_subsystem 2 00:23:40.300 20:15:22 -- target/dif.sh@18 -- # local sub_id=2 00:23:40.300 20:15:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 bdev_null2 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.300 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.300 20:15:22 -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:40.300 20:15:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:40.300 20:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:40.301 20:15:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:40.301 20:15:22 -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:40.301 20:15:22 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:40.301 20:15:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:40.301 20:15:22 -- nvmf/common.sh@521 -- # config=() 00:23:40.301 20:15:22 -- nvmf/common.sh@521 -- # local subsystem config 00:23:40.301 20:15:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.301 20:15:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:40.301 20:15:22 -- target/dif.sh@82 -- # gen_fio_conf 00:23:40.301 20:15:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:40.301 { 00:23:40.301 "params": { 00:23:40.301 "name": "Nvme$subsystem", 00:23:40.301 "trtype": "$TEST_TRANSPORT", 00:23:40.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.301 "adrfam": "ipv4", 00:23:40.301 "trsvcid": "$NVMF_PORT", 00:23:40.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.301 "hdgst": ${hdgst:-false}, 00:23:40.301 "ddgst": ${ddgst:-false} 00:23:40.301 }, 00:23:40.301 "method": "bdev_nvme_attach_controller" 00:23:40.301 } 00:23:40.301 EOF 00:23:40.301 )") 00:23:40.301 20:15:22 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.301 20:15:22 -- target/dif.sh@54 -- # local file 00:23:40.301 20:15:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:40.301 20:15:22 -- target/dif.sh@56 -- # cat 00:23:40.301 20:15:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.301 20:15:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:40.301 20:15:22 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.301 20:15:22 -- common/autotest_common.sh@1327 -- # shift 00:23:40.301 20:15:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:40.301 20:15:22 -- nvmf/common.sh@543 -- # cat 00:23:40.301 20:15:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.301 20:15:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:40.301 20:15:22 -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.301 20:15:22 -- target/dif.sh@73 -- # cat 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:40.301 20:15:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:40.301 20:15:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:40.301 { 00:23:40.301 "params": { 00:23:40.301 "name": "Nvme$subsystem", 00:23:40.301 "trtype": "$TEST_TRANSPORT", 00:23:40.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.301 "adrfam": "ipv4", 00:23:40.301 "trsvcid": "$NVMF_PORT", 00:23:40.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.301 "hdgst": ${hdgst:-false}, 00:23:40.301 "ddgst": ${ddgst:-false} 00:23:40.301 }, 
00:23:40.301 "method": "bdev_nvme_attach_controller" 00:23:40.301 } 00:23:40.301 EOF 00:23:40.301 )") 00:23:40.301 20:15:22 -- target/dif.sh@72 -- # (( file++ )) 00:23:40.301 20:15:22 -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.301 20:15:22 -- target/dif.sh@73 -- # cat 00:23:40.301 20:15:22 -- nvmf/common.sh@543 -- # cat 00:23:40.301 20:15:22 -- target/dif.sh@72 -- # (( file++ )) 00:23:40.301 20:15:22 -- target/dif.sh@72 -- # (( file <= files )) 00:23:40.301 20:15:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:40.301 20:15:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:40.301 { 00:23:40.301 "params": { 00:23:40.301 "name": "Nvme$subsystem", 00:23:40.301 "trtype": "$TEST_TRANSPORT", 00:23:40.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.301 "adrfam": "ipv4", 00:23:40.301 "trsvcid": "$NVMF_PORT", 00:23:40.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.301 "hdgst": ${hdgst:-false}, 00:23:40.301 "ddgst": ${ddgst:-false} 00:23:40.301 }, 00:23:40.301 "method": "bdev_nvme_attach_controller" 00:23:40.301 } 00:23:40.301 EOF 00:23:40.301 )") 00:23:40.301 20:15:22 -- nvmf/common.sh@543 -- # cat 00:23:40.301 20:15:22 -- nvmf/common.sh@545 -- # jq . 00:23:40.301 20:15:22 -- nvmf/common.sh@546 -- # IFS=, 00:23:40.301 20:15:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:40.301 "params": { 00:23:40.301 "name": "Nvme0", 00:23:40.301 "trtype": "tcp", 00:23:40.301 "traddr": "10.0.0.2", 00:23:40.301 "adrfam": "ipv4", 00:23:40.301 "trsvcid": "4420", 00:23:40.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:40.301 "hdgst": false, 00:23:40.301 "ddgst": false 00:23:40.301 }, 00:23:40.301 "method": "bdev_nvme_attach_controller" 00:23:40.301 },{ 00:23:40.301 "params": { 00:23:40.301 "name": "Nvme1", 00:23:40.301 "trtype": "tcp", 00:23:40.301 "traddr": "10.0.0.2", 00:23:40.301 "adrfam": "ipv4", 00:23:40.301 "trsvcid": "4420", 00:23:40.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.301 "hdgst": false, 00:23:40.301 "ddgst": false 00:23:40.301 }, 00:23:40.301 "method": "bdev_nvme_attach_controller" 00:23:40.301 },{ 00:23:40.301 "params": { 00:23:40.301 "name": "Nvme2", 00:23:40.301 "trtype": "tcp", 00:23:40.301 "traddr": "10.0.0.2", 00:23:40.301 "adrfam": "ipv4", 00:23:40.301 "trsvcid": "4420", 00:23:40.301 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.301 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.301 "hdgst": false, 00:23:40.301 "ddgst": false 00:23:40.301 }, 00:23:40.301 "method": "bdev_nvme_attach_controller" 00:23:40.301 }' 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:40.301 20:15:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:40.301 20:15:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:40.301 20:15:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:40.301 20:15:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:40.301 20:15:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:40.301 20:15:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:40.561 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:40.561 ... 00:23:40.561 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:40.561 ... 00:23:40.561 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:40.561 ... 00:23:40.561 fio-3.35 00:23:40.561 Starting 24 threads 00:23:52.827 00:23:52.827 filename0: (groupid=0, jobs=1): err= 0: pid=79590: Wed Apr 24 20:15:33 2024 00:23:52.827 read: IOPS=216, BW=866KiB/s (887kB/s)(8672KiB/10015msec) 00:23:52.827 slat (usec): min=3, max=14021, avg=35.70, stdev=413.85 00:23:52.827 clat (msec): min=14, max=143, avg=73.77, stdev=22.94 00:23:52.827 lat (msec): min=14, max=143, avg=73.80, stdev=22.93 00:23:52.827 clat percentiles (msec): 00:23:52.827 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 51], 00:23:52.827 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 77], 00:23:52.827 | 70.00th=[ 89], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 115], 00:23:52.827 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 140], 99.95th=[ 144], 00:23:52.827 | 99.99th=[ 144] 00:23:52.827 bw ( KiB/s): min= 576, max= 1072, per=4.25%, avg=863.16, stdev=160.57, samples=19 00:23:52.827 iops : min= 144, max= 268, avg=215.79, stdev=40.14, samples=19 00:23:52.827 lat (msec) : 20=0.14%, 50=19.23%, 100=65.18%, 250=15.45% 00:23:52.827 cpu : usr=39.10%, sys=1.41%, ctx=1080, majf=0, minf=9 00:23:52.827 IO depths : 1=0.1%, 2=1.2%, 4=4.8%, 8=78.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:23:52.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.827 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.827 issued rwts: total=2168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.827 filename0: (groupid=0, jobs=1): err= 0: pid=79591: Wed Apr 24 20:15:33 2024 00:23:52.827 read: IOPS=192, BW=768KiB/s (787kB/s)(7732KiB/10064msec) 00:23:52.827 slat (usec): min=3, max=8030, avg=30.25, stdev=326.10 00:23:52.827 clat (usec): min=1675, max=162925, avg=82956.37, stdev=30925.81 00:23:52.827 lat (usec): min=1683, max=162941, avg=82986.63, stdev=30925.07 00:23:52.827 clat percentiles (msec): 00:23:52.827 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 51], 20.00th=[ 64], 00:23:52.827 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 92], 00:23:52.827 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 118], 95.00th=[ 132], 00:23:52.827 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 163], 00:23:52.827 | 99.99th=[ 163] 00:23:52.827 bw ( KiB/s): min= 507, max= 1744, per=3.77%, avg=766.55, stdev=261.96, samples=20 00:23:52.827 iops : min= 126, max= 436, avg=191.60, stdev=65.53, samples=20 00:23:52.827 lat (msec) : 2=0.83%, 4=2.48%, 10=1.66%, 20=1.35%, 50=3.52% 00:23:52.827 lat (msec) : 100=64.67%, 250=25.50% 00:23:52.827 cpu : usr=35.72%, sys=1.58%, ctx=1148, majf=0, minf=0 00:23:52.827 IO depths : 1=0.3%, 2=4.7%, 4=19.2%, 8=62.1%, 16=13.7%, 32=0.0%, >=64=0.0% 00:23:52.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.827 complete : 0=0.0%, 4=93.0%, 8=2.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.827 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.827 filename0: 
(groupid=0, jobs=1): err= 0: pid=79592: Wed Apr 24 20:15:33 2024 00:23:52.827 read: IOPS=205, BW=823KiB/s (843kB/s)(8264KiB/10043msec) 00:23:52.827 slat (usec): min=3, max=8022, avg=28.44, stdev=258.94 00:23:52.827 clat (msec): min=33, max=158, avg=77.53, stdev=23.00 00:23:52.827 lat (msec): min=33, max=158, avg=77.55, stdev=23.00 00:23:52.827 clat percentiles (msec): 00:23:52.827 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 56], 00:23:52.827 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 85], 00:23:52.827 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 117], 00:23:52.827 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 159], 00:23:52.827 | 99.99th=[ 159] 00:23:52.827 bw ( KiB/s): min= 528, max= 1072, per=4.03%, avg=819.80, stdev=169.14, samples=20 00:23:52.827 iops : min= 132, max= 268, avg=204.90, stdev=42.33, samples=20 00:23:52.827 lat (msec) : 50=14.86%, 100=68.05%, 250=17.09% 00:23:52.827 cpu : usr=44.52%, sys=1.68%, ctx=1321, majf=0, minf=9 00:23:52.827 IO depths : 1=0.1%, 2=3.0%, 4=12.0%, 8=70.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:23:52.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.827 complete : 0=0.0%, 4=90.4%, 8=7.0%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.827 issued rwts: total=2066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.827 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.827 filename0: (groupid=0, jobs=1): err= 0: pid=79593: Wed Apr 24 20:15:33 2024 00:23:52.827 read: IOPS=222, BW=890KiB/s (912kB/s)(8952KiB/10055msec) 00:23:52.828 slat (usec): min=7, max=9018, avg=27.61, stdev=268.25 00:23:52.828 clat (msec): min=6, max=152, avg=71.67, stdev=23.08 00:23:52.828 lat (msec): min=6, max=152, avg=71.69, stdev=23.09 00:23:52.828 clat percentiles (msec): 00:23:52.828 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 51], 00:23:52.828 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 75], 00:23:52.828 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 103], 95.00th=[ 108], 00:23:52.828 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 132], 99.95th=[ 153], 00:23:52.828 | 99.99th=[ 153] 00:23:52.828 bw ( KiB/s): min= 672, max= 1312, per=4.37%, avg=887.85, stdev=167.42, samples=20 00:23:52.828 iops : min= 168, max= 328, avg=221.90, stdev=41.89, samples=20 00:23:52.828 lat (msec) : 10=0.71%, 20=1.34%, 50=17.29%, 100=68.01%, 250=12.65% 00:23:52.828 cpu : usr=44.15%, sys=1.80%, ctx=1202, majf=0, minf=9 00:23:52.828 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:23:52.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.828 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.828 filename0: (groupid=0, jobs=1): err= 0: pid=79594: Wed Apr 24 20:15:33 2024 00:23:52.828 read: IOPS=225, BW=900KiB/s (922kB/s)(9016KiB/10014msec) 00:23:52.828 slat (usec): min=7, max=11037, avg=25.90, stdev=287.68 00:23:52.828 clat (msec): min=14, max=143, avg=70.97, stdev=22.23 00:23:52.828 lat (msec): min=14, max=143, avg=71.00, stdev=22.23 00:23:52.828 clat percentiles (msec): 00:23:52.828 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:23:52.828 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 72], 00:23:52.828 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 108], 00:23:52.828 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 
144], 00:23:52.828 | 99.99th=[ 144] 00:23:52.828 bw ( KiB/s): min= 641, max= 1072, per=4.43%, avg=899.42, stdev=139.06, samples=19 00:23:52.828 iops : min= 160, max= 268, avg=224.84, stdev=34.79, samples=19 00:23:52.828 lat (msec) : 20=0.40%, 50=22.76%, 100=64.20%, 250=12.64% 00:23:52.828 cpu : usr=31.70%, sys=1.39%, ctx=905, majf=0, minf=9 00:23:52.828 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:23:52.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 issued rwts: total=2254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.828 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.828 filename0: (groupid=0, jobs=1): err= 0: pid=79595: Wed Apr 24 20:15:33 2024 00:23:52.828 read: IOPS=224, BW=898KiB/s (920kB/s)(9004KiB/10023msec) 00:23:52.828 slat (usec): min=3, max=8034, avg=51.02, stdev=526.30 00:23:52.828 clat (msec): min=23, max=152, avg=70.98, stdev=22.21 00:23:52.828 lat (msec): min=23, max=152, avg=71.03, stdev=22.20 00:23:52.828 clat percentiles (msec): 00:23:52.828 | 1.00th=[ 35], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:23:52.828 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:23:52.828 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 108], 00:23:52.828 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 153], 00:23:52.828 | 99.99th=[ 153] 00:23:52.828 bw ( KiB/s): min= 633, max= 1056, per=4.40%, avg=893.95, stdev=134.65, samples=20 00:23:52.828 iops : min= 158, max= 264, avg=223.45, stdev=33.70, samples=20 00:23:52.828 lat (msec) : 50=22.75%, 100=66.77%, 250=10.48% 00:23:52.828 cpu : usr=31.82%, sys=1.39%, ctx=892, majf=0, minf=9 00:23:52.828 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:52.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.828 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.828 filename0: (groupid=0, jobs=1): err= 0: pid=79596: Wed Apr 24 20:15:33 2024 00:23:52.828 read: IOPS=210, BW=842KiB/s (862kB/s)(8452KiB/10035msec) 00:23:52.828 slat (nsec): min=3928, max=50465, avg=15389.85, stdev=6259.13 00:23:52.828 clat (msec): min=30, max=155, avg=75.85, stdev=22.47 00:23:52.828 lat (msec): min=30, max=155, avg=75.86, stdev=22.47 00:23:52.828 clat percentiles (msec): 00:23:52.828 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:23:52.828 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 71], 60.00th=[ 79], 00:23:52.828 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 118], 00:23:52.828 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 155], 00:23:52.828 | 99.99th=[ 155] 00:23:52.828 bw ( KiB/s): min= 528, max= 1024, per=4.13%, avg=838.75, stdev=157.62, samples=20 00:23:52.828 iops : min= 132, max= 256, avg=209.65, stdev=39.43, samples=20 00:23:52.828 lat (msec) : 50=14.34%, 100=69.90%, 250=15.76% 00:23:52.828 cpu : usr=40.79%, sys=1.54%, ctx=1358, majf=0, minf=9 00:23:52.828 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:52.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 complete : 0=0.0%, 4=88.7%, 8=10.3%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:52.828 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.828 filename0: (groupid=0, jobs=1): err= 0: pid=79597: Wed Apr 24 20:15:33 2024 00:23:52.828 read: IOPS=220, BW=883KiB/s (905kB/s)(8872KiB/10044msec) 00:23:52.828 slat (usec): min=4, max=4035, avg=19.34, stdev=140.13 00:23:52.828 clat (msec): min=9, max=132, avg=72.28, stdev=21.87 00:23:52.828 lat (msec): min=9, max=132, avg=72.30, stdev=21.87 00:23:52.828 clat percentiles (msec): 00:23:52.828 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 53], 00:23:52.828 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 75], 00:23:52.828 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:23:52.828 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 132], 00:23:52.828 | 99.99th=[ 132] 00:23:52.828 bw ( KiB/s): min= 712, max= 1138, per=4.34%, avg=882.05, stdev=134.69, samples=20 00:23:52.828 iops : min= 178, max= 284, avg=220.40, stdev=33.72, samples=20 00:23:52.828 lat (msec) : 10=0.72%, 20=0.72%, 50=15.01%, 100=70.87%, 250=12.67% 00:23:52.828 cpu : usr=41.68%, sys=1.65%, ctx=1414, majf=0, minf=9 00:23:52.828 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:23:52.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.828 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.828 filename1: (groupid=0, jobs=1): err= 0: pid=79598: Wed Apr 24 20:15:33 2024 00:23:52.828 read: IOPS=207, BW=829KiB/s (849kB/s)(8328KiB/10045msec) 00:23:52.828 slat (usec): min=4, max=10040, avg=41.79, stdev=420.97 00:23:52.828 clat (msec): min=6, max=157, avg=76.87, stdev=24.43 00:23:52.828 lat (msec): min=6, max=157, avg=76.91, stdev=24.44 00:23:52.828 clat percentiles (msec): 00:23:52.828 | 1.00th=[ 11], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 59], 00:23:52.828 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 83], 00:23:52.828 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 120], 00:23:52.828 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:23:52.828 | 99.99th=[ 157] 00:23:52.828 bw ( KiB/s): min= 523, max= 1026, per=4.08%, avg=828.20, stdev=153.05, samples=20 00:23:52.828 iops : min= 130, max= 256, avg=206.95, stdev=38.32, samples=20 00:23:52.828 lat (msec) : 10=0.77%, 20=0.77%, 50=12.73%, 100=68.30%, 250=17.44% 00:23:52.828 cpu : usr=37.34%, sys=1.54%, ctx=1203, majf=0, minf=9 00:23:52.828 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=75.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:52.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 complete : 0=0.0%, 4=89.5%, 8=9.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.828 issued rwts: total=2082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.828 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.828 filename1: (groupid=0, jobs=1): err= 0: pid=79599: Wed Apr 24 20:15:33 2024 00:23:52.828 read: IOPS=224, BW=899KiB/s (920kB/s)(9000KiB/10015msec) 00:23:52.828 slat (usec): min=3, max=7046, avg=28.48, stdev=260.72 00:23:52.828 clat (msec): min=14, max=144, avg=71.11, stdev=21.87 00:23:52.829 lat (msec): min=14, max=144, avg=71.14, stdev=21.88 00:23:52.829 clat percentiles (msec): 00:23:52.829 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 49], 00:23:52.829 | 30.00th=[ 58], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 73], 00:23:52.829 | 70.00th=[ 81], 80.00th=[ 
93], 90.00th=[ 103], 95.00th=[ 108], 00:23:52.829 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 146], 00:23:52.829 | 99.99th=[ 146] 00:23:52.829 bw ( KiB/s): min= 648, max= 1072, per=4.41%, avg=896.42, stdev=132.06, samples=19 00:23:52.829 iops : min= 162, max= 268, avg=224.11, stdev=33.01, samples=19 00:23:52.829 lat (msec) : 20=0.27%, 50=21.16%, 100=67.42%, 250=11.16% 00:23:52.829 cpu : usr=42.35%, sys=1.73%, ctx=1436, majf=0, minf=9 00:23:52.829 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:52.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.829 filename1: (groupid=0, jobs=1): err= 0: pid=79600: Wed Apr 24 20:15:33 2024 00:23:52.829 read: IOPS=200, BW=804KiB/s (823kB/s)(8068KiB/10037msec) 00:23:52.829 slat (usec): min=6, max=7045, avg=25.94, stdev=226.76 00:23:52.829 clat (msec): min=30, max=154, avg=79.38, stdev=22.47 00:23:52.829 lat (msec): min=30, max=154, avg=79.40, stdev=22.46 00:23:52.829 clat percentiles (msec): 00:23:52.829 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 62], 00:23:52.829 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 86], 00:23:52.829 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 114], 00:23:52.829 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 150], 99.95th=[ 155], 00:23:52.829 | 99.99th=[ 155] 00:23:52.829 bw ( KiB/s): min= 528, max= 1048, per=3.94%, avg=800.30, stdev=161.99, samples=20 00:23:52.829 iops : min= 132, max= 262, avg=200.05, stdev=40.51, samples=20 00:23:52.829 lat (msec) : 50=10.86%, 100=66.39%, 250=22.76% 00:23:52.829 cpu : usr=41.67%, sys=1.65%, ctx=1391, majf=0, minf=9 00:23:52.829 IO depths : 1=0.1%, 2=3.6%, 4=14.4%, 8=67.8%, 16=14.1%, 32=0.0%, >=64=0.0% 00:23:52.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 complete : 0=0.0%, 4=91.3%, 8=5.6%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.829 filename1: (groupid=0, jobs=1): err= 0: pid=79601: Wed Apr 24 20:15:33 2024 00:23:52.829 read: IOPS=192, BW=770KiB/s (788kB/s)(7732KiB/10042msec) 00:23:52.829 slat (usec): min=7, max=8019, avg=27.75, stdev=287.29 00:23:52.829 clat (msec): min=33, max=162, avg=82.84, stdev=22.49 00:23:52.829 lat (msec): min=33, max=162, avg=82.87, stdev=22.49 00:23:52.829 clat percentiles (msec): 00:23:52.829 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 65], 00:23:52.829 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 90], 00:23:52.829 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 111], 95.00th=[ 117], 00:23:52.829 | 99.00th=[ 144], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 163], 00:23:52.829 | 99.99th=[ 163] 00:23:52.829 bw ( KiB/s): min= 512, max= 1040, per=3.77%, avg=766.60, stdev=153.06, samples=20 00:23:52.829 iops : min= 128, max= 260, avg=191.60, stdev=38.30, samples=20 00:23:52.829 lat (msec) : 50=7.45%, 100=71.24%, 250=21.31% 00:23:52.829 cpu : usr=39.42%, sys=1.71%, ctx=1167, majf=0, minf=9 00:23:52.829 IO depths : 1=0.1%, 2=4.1%, 4=16.7%, 8=65.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:23:52.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 complete : 0=0.0%, 4=92.0%, 8=4.3%, 16=3.7%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.829 filename1: (groupid=0, jobs=1): err= 0: pid=79602: Wed Apr 24 20:15:33 2024 00:23:52.829 read: IOPS=210, BW=840KiB/s (861kB/s)(8416KiB/10014msec) 00:23:52.829 slat (usec): min=4, max=4032, avg=21.63, stdev=151.42 00:23:52.829 clat (msec): min=20, max=155, avg=76.01, stdev=23.34 00:23:52.829 lat (msec): min=20, max=155, avg=76.03, stdev=23.34 00:23:52.829 clat percentiles (msec): 00:23:52.829 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 55], 00:23:52.829 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 80], 00:23:52.829 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 105], 95.00th=[ 117], 00:23:52.829 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 157], 00:23:52.829 | 99.99th=[ 157] 00:23:52.829 bw ( KiB/s): min= 528, max= 1072, per=4.11%, avg=835.84, stdev=179.09, samples=19 00:23:52.829 iops : min= 132, max= 268, avg=208.95, stdev=44.78, samples=19 00:23:52.829 lat (msec) : 50=17.30%, 100=65.35%, 250=17.35% 00:23:52.829 cpu : usr=47.88%, sys=2.10%, ctx=1257, majf=0, minf=9 00:23:52.829 IO depths : 1=0.1%, 2=2.8%, 4=11.0%, 8=71.9%, 16=14.3%, 32=0.0%, >=64=0.0% 00:23:52.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.829 filename1: (groupid=0, jobs=1): err= 0: pid=79603: Wed Apr 24 20:15:33 2024 00:23:52.829 read: IOPS=213, BW=854KiB/s (875kB/s)(8552KiB/10010msec) 00:23:52.829 slat (usec): min=3, max=8033, avg=38.41, stdev=415.65 00:23:52.829 clat (msec): min=14, max=151, avg=74.71, stdev=23.48 00:23:52.829 lat (msec): min=14, max=151, avg=74.75, stdev=23.49 00:23:52.829 clat percentiles (msec): 00:23:52.829 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 51], 00:23:52.829 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 79], 00:23:52.829 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 116], 00:23:52.829 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 148], 99.95th=[ 153], 00:23:52.829 | 99.99th=[ 153] 00:23:52.829 bw ( KiB/s): min= 528, max= 1048, per=4.18%, avg=848.84, stdev=179.40, samples=19 00:23:52.829 iops : min= 132, max= 262, avg=212.21, stdev=44.85, samples=19 00:23:52.829 lat (msec) : 20=0.19%, 50=19.83%, 100=65.81%, 250=14.17% 00:23:52.829 cpu : usr=31.85%, sys=1.39%, ctx=907, majf=0, minf=9 00:23:52.829 IO depths : 1=0.1%, 2=2.2%, 4=8.6%, 8=74.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:52.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 complete : 0=0.0%, 4=89.4%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.829 filename1: (groupid=0, jobs=1): err= 0: pid=79604: Wed Apr 24 20:15:33 2024 00:23:52.829 read: IOPS=213, BW=854KiB/s (874kB/s)(8556KiB/10023msec) 00:23:52.829 slat (usec): min=4, max=8012, avg=22.59, stdev=211.93 00:23:52.829 clat (msec): min=22, max=151, avg=74.80, stdev=24.39 00:23:52.829 lat (msec): min=22, max=151, avg=74.82, stdev=24.38 00:23:52.829 clat percentiles (msec): 00:23:52.829 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 51], 00:23:52.829 | 30.00th=[ 61], 40.00th=[ 
67], 50.00th=[ 70], 60.00th=[ 77], 00:23:52.829 | 70.00th=[ 89], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 115], 00:23:52.829 | 99.00th=[ 134], 99.50th=[ 148], 99.90th=[ 150], 99.95th=[ 153], 00:23:52.829 | 99.99th=[ 153] 00:23:52.829 bw ( KiB/s): min= 512, max= 1072, per=4.19%, avg=851.10, stdev=196.38, samples=20 00:23:52.829 iops : min= 128, max= 268, avg=212.75, stdev=49.10, samples=20 00:23:52.829 lat (msec) : 50=20.06%, 100=60.92%, 250=19.03% 00:23:52.829 cpu : usr=42.58%, sys=1.69%, ctx=1669, majf=0, minf=9 00:23:52.829 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=75.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:52.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 complete : 0=0.0%, 4=89.1%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.829 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.829 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.829 filename1: (groupid=0, jobs=1): err= 0: pid=79605: Wed Apr 24 20:15:33 2024 00:23:52.829 read: IOPS=218, BW=874KiB/s (895kB/s)(8752KiB/10019msec) 00:23:52.829 slat (usec): min=3, max=8036, avg=34.15, stdev=382.57 00:23:52.829 clat (msec): min=21, max=133, avg=73.07, stdev=20.96 00:23:52.829 lat (msec): min=21, max=133, avg=73.10, stdev=20.96 00:23:52.829 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 53], 00:23:52.830 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 75], 00:23:52.830 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 108], 00:23:52.830 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 134], 00:23:52.830 | 99.99th=[ 134] 00:23:52.830 bw ( KiB/s): min= 672, max= 1024, per=4.29%, avg=872.42, stdev=120.07, samples=19 00:23:52.830 iops : min= 168, max= 256, avg=218.11, stdev=30.02, samples=19 00:23:52.830 lat (msec) : 50=18.01%, 100=69.52%, 250=12.48% 00:23:52.830 cpu : usr=31.98%, sys=1.12%, ctx=891, majf=0, minf=9 00:23:52.830 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:52.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.830 filename2: (groupid=0, jobs=1): err= 0: pid=79606: Wed Apr 24 20:15:33 2024 00:23:52.830 read: IOPS=208, BW=835KiB/s (855kB/s)(8388KiB/10047msec) 00:23:52.830 slat (usec): min=7, max=8024, avg=22.11, stdev=247.09 00:23:52.830 clat (msec): min=6, max=142, avg=76.42, stdev=22.79 00:23:52.830 lat (msec): min=6, max=142, avg=76.44, stdev=22.79 00:23:52.830 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 60], 00:23:52.830 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 81], 00:23:52.830 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 110], 00:23:52.830 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:23:52.830 | 99.99th=[ 144] 00:23:52.830 bw ( KiB/s): min= 624, max= 1024, per=4.10%, avg=833.95, stdev=126.25, samples=20 00:23:52.830 iops : min= 156, max= 256, avg=208.40, stdev=31.67, samples=20 00:23:52.830 lat (msec) : 10=0.10%, 20=1.43%, 50=10.35%, 100=70.43%, 250=17.69% 00:23:52.830 cpu : usr=32.19%, sys=1.25%, ctx=899, majf=0, minf=9 00:23:52.830 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.0%, 16=16.8%, 32=0.0%, >=64=0.0% 00:23:52.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:23:52.830 complete : 0=0.0%, 4=88.9%, 8=10.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.830 filename2: (groupid=0, jobs=1): err= 0: pid=79607: Wed Apr 24 20:15:33 2024 00:23:52.830 read: IOPS=223, BW=894KiB/s (915kB/s)(8964KiB/10027msec) 00:23:52.830 slat (nsec): min=3410, max=41855, avg=16205.01, stdev=5966.85 00:23:52.830 clat (msec): min=31, max=133, avg=71.46, stdev=20.76 00:23:52.830 lat (msec): min=31, max=133, avg=71.48, stdev=20.76 00:23:52.830 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:23:52.830 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 72], 00:23:52.830 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 102], 95.00th=[ 107], 00:23:52.830 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 134], 99.95th=[ 134], 00:23:52.830 | 99.99th=[ 134] 00:23:52.830 bw ( KiB/s): min= 713, max= 1048, per=4.39%, avg=892.35, stdev=108.74, samples=20 00:23:52.830 iops : min= 178, max= 262, avg=223.05, stdev=27.23, samples=20 00:23:52.830 lat (msec) : 50=20.93%, 100=68.23%, 250=10.84% 00:23:52.830 cpu : usr=37.08%, sys=1.54%, ctx=1110, majf=0, minf=9 00:23:52.830 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.2%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:52.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.830 filename2: (groupid=0, jobs=1): err= 0: pid=79608: Wed Apr 24 20:15:33 2024 00:23:52.830 read: IOPS=202, BW=811KiB/s (831kB/s)(8148KiB/10041msec) 00:23:52.830 slat (usec): min=6, max=8048, avg=25.91, stdev=266.62 00:23:52.830 clat (msec): min=26, max=151, avg=78.61, stdev=20.78 00:23:52.830 lat (msec): min=26, max=151, avg=78.64, stdev=20.78 00:23:52.830 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 61], 00:23:52.830 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 84], 00:23:52.830 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 113], 00:23:52.830 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 153], 00:23:52.830 | 99.99th=[ 153] 00:23:52.830 bw ( KiB/s): min= 528, max= 976, per=3.98%, avg=808.20, stdev=144.24, samples=20 00:23:52.830 iops : min= 132, max= 244, avg=202.00, stdev=36.10, samples=20 00:23:52.830 lat (msec) : 50=8.93%, 100=73.88%, 250=17.18% 00:23:52.830 cpu : usr=35.78%, sys=1.25%, ctx=1003, majf=0, minf=9 00:23:52.830 IO depths : 1=0.1%, 2=2.7%, 4=10.6%, 8=71.6%, 16=15.2%, 32=0.0%, >=64=0.0% 00:23:52.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 complete : 0=0.0%, 4=90.6%, 8=7.1%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.830 filename2: (groupid=0, jobs=1): err= 0: pid=79609: Wed Apr 24 20:15:33 2024 00:23:52.830 read: IOPS=208, BW=835KiB/s (855kB/s)(8368KiB/10018msec) 00:23:52.830 slat (usec): min=7, max=8032, avg=27.29, stdev=303.35 00:23:52.830 clat (msec): min=23, max=157, avg=76.40, stdev=23.39 00:23:52.830 lat (msec): min=23, max=157, avg=76.42, stdev=23.40 00:23:52.830 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 36], 5.00th=[ 
45], 10.00th=[ 47], 20.00th=[ 58], 00:23:52.830 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 83], 00:23:52.830 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 117], 00:23:52.830 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 157], 00:23:52.830 | 99.99th=[ 157] 00:23:52.830 bw ( KiB/s): min= 528, max= 1048, per=4.10%, avg=833.68, stdev=175.83, samples=19 00:23:52.830 iops : min= 132, max= 262, avg=208.42, stdev=43.96, samples=19 00:23:52.830 lat (msec) : 50=16.40%, 100=64.72%, 250=18.88% 00:23:52.830 cpu : usr=32.04%, sys=1.10%, ctx=916, majf=0, minf=9 00:23:52.830 IO depths : 1=0.1%, 2=1.7%, 4=6.8%, 8=75.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:52.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 complete : 0=0.0%, 4=89.3%, 8=9.2%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.830 filename2: (groupid=0, jobs=1): err= 0: pid=79610: Wed Apr 24 20:15:33 2024 00:23:52.830 read: IOPS=211, BW=845KiB/s (865kB/s)(8460KiB/10013msec) 00:23:52.830 slat (usec): min=3, max=8024, avg=23.74, stdev=246.28 00:23:52.830 clat (msec): min=14, max=143, avg=75.63, stdev=23.51 00:23:52.830 lat (msec): min=14, max=143, avg=75.65, stdev=23.51 00:23:52.830 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 34], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 54], 00:23:52.830 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 81], 00:23:52.830 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 112], 00:23:52.830 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 144], 00:23:52.830 | 99.99th=[ 144] 00:23:52.830 bw ( KiB/s): min= 528, max= 1056, per=4.14%, avg=840.84, stdev=181.17, samples=19 00:23:52.830 iops : min= 132, max= 264, avg=210.21, stdev=45.29, samples=19 00:23:52.830 lat (msec) : 20=0.14%, 50=18.44%, 100=64.11%, 250=17.30% 00:23:52.830 cpu : usr=36.17%, sys=1.40%, ctx=1122, majf=0, minf=9 00:23:52.830 IO depths : 1=0.1%, 2=2.3%, 4=9.0%, 8=73.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:52.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 complete : 0=0.0%, 4=89.6%, 8=8.4%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.830 issued rwts: total=2115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.830 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.830 filename2: (groupid=0, jobs=1): err= 0: pid=79611: Wed Apr 24 20:15:33 2024 00:23:52.830 read: IOPS=220, BW=880KiB/s (901kB/s)(8828KiB/10029msec) 00:23:52.830 slat (usec): min=7, max=7048, avg=21.25, stdev=211.49 00:23:52.830 clat (msec): min=31, max=132, avg=72.53, stdev=20.62 00:23:52.830 lat (msec): min=31, max=132, avg=72.55, stdev=20.61 00:23:52.830 clat percentiles (msec): 00:23:52.830 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 53], 00:23:52.830 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 75], 00:23:52.830 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 108], 00:23:52.830 | 99.00th=[ 120], 99.50th=[ 129], 99.90th=[ 132], 99.95th=[ 132], 00:23:52.830 | 99.99th=[ 132] 00:23:52.831 bw ( KiB/s): min= 672, max= 1048, per=4.33%, avg=879.00, stdev=123.01, samples=20 00:23:52.831 iops : min= 168, max= 262, avg=219.70, stdev=30.81, samples=20 00:23:52.831 lat (msec) : 50=17.67%, 100=71.23%, 250=11.10% 00:23:52.831 cpu : usr=38.80%, sys=1.45%, ctx=1175, majf=0, minf=9 00:23:52.831 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 
00:23:52.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.831 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.831 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.831 filename2: (groupid=0, jobs=1): err= 0: pid=79612: Wed Apr 24 20:15:33 2024 00:23:52.831 read: IOPS=203, BW=814KiB/s (833kB/s)(8172KiB/10042msec) 00:23:52.831 slat (usec): min=4, max=3498, avg=16.57, stdev=77.31 00:23:52.831 clat (msec): min=18, max=145, avg=78.47, stdev=22.58 00:23:52.831 lat (msec): min=18, max=145, avg=78.49, stdev=22.58 00:23:52.831 clat percentiles (msec): 00:23:52.831 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 59], 00:23:52.831 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 86], 00:23:52.831 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 114], 00:23:52.831 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 146], 00:23:52.831 | 99.99th=[ 146] 00:23:52.831 bw ( KiB/s): min= 528, max= 1016, per=4.00%, avg=812.20, stdev=158.49, samples=20 00:23:52.831 iops : min= 132, max= 254, avg=203.00, stdev=39.67, samples=20 00:23:52.831 lat (msec) : 20=0.78%, 50=10.87%, 100=69.70%, 250=18.65% 00:23:52.831 cpu : usr=38.19%, sys=1.88%, ctx=1285, majf=0, minf=9 00:23:52.831 IO depths : 1=0.1%, 2=2.5%, 4=9.9%, 8=72.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:52.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.831 complete : 0=0.0%, 4=90.2%, 8=7.6%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.831 issued rwts: total=2043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.831 filename2: (groupid=0, jobs=1): err= 0: pid=79613: Wed Apr 24 20:15:33 2024 00:23:52.831 read: IOPS=216, BW=864KiB/s (885kB/s)(8664KiB/10024msec) 00:23:52.831 slat (usec): min=4, max=8026, avg=25.22, stdev=257.95 00:23:52.831 clat (msec): min=19, max=148, avg=73.90, stdev=23.49 00:23:52.831 lat (msec): min=19, max=148, avg=73.93, stdev=23.48 00:23:52.831 clat percentiles (msec): 00:23:52.831 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 46], 20.00th=[ 50], 00:23:52.831 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:23:52.831 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 115], 00:23:52.831 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 148], 00:23:52.831 | 99.99th=[ 148] 00:23:52.831 bw ( KiB/s): min= 528, max= 1072, per=4.23%, avg=859.95, stdev=176.50, samples=20 00:23:52.831 iops : min= 132, max= 268, avg=214.95, stdev=44.14, samples=20 00:23:52.831 lat (msec) : 20=0.14%, 50=20.31%, 100=64.77%, 250=14.77% 00:23:52.831 cpu : usr=33.36%, sys=1.45%, ctx=944, majf=0, minf=9 00:23:52.831 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=78.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:23:52.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.831 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.831 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.831 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:52.831 00:23:52.831 Run status group 0 (all jobs): 00:23:52.831 READ: bw=19.8MiB/s (20.8MB/s), 768KiB/s-900KiB/s (787kB/s-922kB/s), io=200MiB (209MB), run=10010-10064msec 00:23:52.831 20:15:33 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:52.831 20:15:33 -- target/dif.sh@43 -- # local sub 00:23:52.831 20:15:33 -- target/dif.sh@45 -- # for 
sub in "$@" 00:23:52.831 20:15:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:52.831 20:15:33 -- target/dif.sh@36 -- # local sub_id=0 00:23:52.831 20:15:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.831 20:15:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:52.831 20:15:33 -- target/dif.sh@36 -- # local sub_id=1 00:23:52.831 20:15:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.831 20:15:33 -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:52.831 20:15:33 -- target/dif.sh@36 -- # local sub_id=2 00:23:52.831 20:15:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@115 -- # NULL_DIF=1 00:23:52.831 20:15:33 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:52.831 20:15:33 -- target/dif.sh@115 -- # numjobs=2 00:23:52.831 20:15:33 -- target/dif.sh@115 -- # iodepth=8 00:23:52.831 20:15:33 -- target/dif.sh@115 -- # runtime=5 00:23:52.831 20:15:33 -- target/dif.sh@115 -- # files=1 00:23:52.831 20:15:33 -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:52.831 20:15:33 -- target/dif.sh@28 -- # local sub 00:23:52.831 20:15:33 -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.831 20:15:33 -- target/dif.sh@31 -- # create_subsystem 0 00:23:52.831 20:15:33 -- target/dif.sh@18 -- # local sub_id=0 00:23:52.831 20:15:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 bdev_null0 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:52.831 
20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.831 20:15:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:52.831 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.831 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.831 [2024-04-24 20:15:33.694636] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.831 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.832 20:15:33 -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.832 20:15:33 -- target/dif.sh@31 -- # create_subsystem 1 00:23:52.832 20:15:33 -- target/dif.sh@18 -- # local sub_id=1 00:23:52.832 20:15:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:52.832 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.832 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.832 bdev_null1 00:23:52.832 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.832 20:15:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:52.832 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.832 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.832 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.832 20:15:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:52.832 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.832 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.832 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.832 20:15:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.832 20:15:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.832 20:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:52.832 20:15:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.832 20:15:33 -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:52.832 20:15:33 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:52.832 20:15:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:52.832 20:15:33 -- nvmf/common.sh@521 -- # config=() 00:23:52.832 20:15:33 -- nvmf/common.sh@521 -- # local subsystem config 00:23:52.832 20:15:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.832 20:15:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.832 20:15:33 -- target/dif.sh@82 -- # gen_fio_conf 00:23:52.832 20:15:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.832 { 00:23:52.832 "params": { 00:23:52.832 "name": "Nvme$subsystem", 00:23:52.832 "trtype": "$TEST_TRANSPORT", 00:23:52.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.832 "adrfam": "ipv4", 00:23:52.832 "trsvcid": "$NVMF_PORT", 00:23:52.832 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.832 "hdgst": ${hdgst:-false}, 00:23:52.832 "ddgst": ${ddgst:-false} 00:23:52.832 }, 00:23:52.832 "method": "bdev_nvme_attach_controller" 00:23:52.832 } 00:23:52.832 EOF 00:23:52.832 )") 00:23:52.832 20:15:33 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.832 20:15:33 -- target/dif.sh@54 -- # local file 00:23:52.832 20:15:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:52.832 20:15:33 -- target/dif.sh@56 -- # cat 00:23:52.832 20:15:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:52.832 20:15:33 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:52.832 20:15:33 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.832 20:15:33 -- common/autotest_common.sh@1327 -- # shift 00:23:52.832 20:15:33 -- nvmf/common.sh@543 -- # cat 00:23:52.832 20:15:33 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:52.832 20:15:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.832 20:15:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:52.832 20:15:33 -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.832 20:15:33 -- target/dif.sh@73 -- # cat 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:52.832 20:15:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:52.832 20:15:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:52.832 { 00:23:52.832 "params": { 00:23:52.832 "name": "Nvme$subsystem", 00:23:52.832 "trtype": "$TEST_TRANSPORT", 00:23:52.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.832 "adrfam": "ipv4", 00:23:52.832 "trsvcid": "$NVMF_PORT", 00:23:52.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.832 "hdgst": ${hdgst:-false}, 00:23:52.832 "ddgst": ${ddgst:-false} 00:23:52.832 }, 00:23:52.832 "method": "bdev_nvme_attach_controller" 00:23:52.832 } 00:23:52.832 EOF 00:23:52.832 )") 00:23:52.832 20:15:33 -- target/dif.sh@72 -- # (( file++ )) 00:23:52.832 20:15:33 -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.832 20:15:33 -- nvmf/common.sh@543 -- # cat 00:23:52.832 20:15:33 -- nvmf/common.sh@545 -- # jq . 
00:23:52.832 20:15:33 -- nvmf/common.sh@546 -- # IFS=, 00:23:52.832 20:15:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:52.832 "params": { 00:23:52.832 "name": "Nvme0", 00:23:52.832 "trtype": "tcp", 00:23:52.832 "traddr": "10.0.0.2", 00:23:52.832 "adrfam": "ipv4", 00:23:52.832 "trsvcid": "4420", 00:23:52.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:52.832 "hdgst": false, 00:23:52.832 "ddgst": false 00:23:52.832 }, 00:23:52.832 "method": "bdev_nvme_attach_controller" 00:23:52.832 },{ 00:23:52.832 "params": { 00:23:52.832 "name": "Nvme1", 00:23:52.832 "trtype": "tcp", 00:23:52.832 "traddr": "10.0.0.2", 00:23:52.832 "adrfam": "ipv4", 00:23:52.832 "trsvcid": "4420", 00:23:52.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.832 "hdgst": false, 00:23:52.832 "ddgst": false 00:23:52.832 }, 00:23:52.832 "method": "bdev_nvme_attach_controller" 00:23:52.832 }' 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:52.832 20:15:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:52.832 20:15:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:52.832 20:15:33 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:52.832 20:15:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:52.832 20:15:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.832 20:15:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.832 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:52.832 ... 00:23:52.832 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:52.832 ... 
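The JSON printed above is handed to fio over /dev/fd/62, alongside a generated job file on /dev/fd/61, with the SPDK bdev fio plugin preloaded. A rough standalone equivalent using ordinary files, covering just one of the two subsystems and with illustrative job options (the controller attached as Nvme0 is exposed to fio as the bdev Nvme0n1):

  # Hypothetical config file mirroring the attach-controller parameters printed above
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  # Preload the fio plugin built in this tree and run a job shaped like the one traced above
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json \
      --thread=1 --filename=Nvme0n1 --rw=randread --bs=8k,16k,128k --iodepth=8 --numjobs=2 \
      --runtime=5 --time_based=1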
00:23:52.832 fio-3.35 00:23:52.832 Starting 4 threads 00:23:58.140 00:23:58.140 filename0: (groupid=0, jobs=1): err= 0: pid=79762: Wed Apr 24 20:15:39 2024 00:23:58.140 read: IOPS=1829, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5002msec) 00:23:58.140 slat (nsec): min=5796, max=92508, avg=11082.24, stdev=4857.60 00:23:58.140 clat (usec): min=1080, max=6371, avg=4325.20, stdev=373.13 00:23:58.140 lat (usec): min=1097, max=6401, avg=4336.28, stdev=372.30 00:23:58.140 clat percentiles (usec): 00:23:58.140 | 1.00th=[ 3294], 5.00th=[ 3621], 10.00th=[ 3982], 20.00th=[ 4146], 00:23:58.140 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:23:58.140 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4686], 00:23:58.140 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5473], 99.95th=[ 5735], 00:23:58.140 | 99.99th=[ 6390] 00:23:58.140 bw ( KiB/s): min=14064, max=16880, per=19.85%, avg=14675.56, stdev=895.67, samples=9 00:23:58.140 iops : min= 1758, max= 2110, avg=1834.44, stdev=111.96, samples=9 00:23:58.140 lat (msec) : 2=0.52%, 4=10.12%, 10=89.36% 00:23:58.140 cpu : usr=92.94%, sys=6.42%, ctx=7, majf=0, minf=0 00:23:58.140 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 issued rwts: total=9151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:58.140 filename0: (groupid=0, jobs=1): err= 0: pid=79763: Wed Apr 24 20:15:39 2024 00:23:58.140 read: IOPS=2457, BW=19.2MiB/s (20.1MB/s)(96.0MiB/5001msec) 00:23:58.140 slat (nsec): min=6601, max=48084, avg=14339.67, stdev=3742.49 00:23:58.140 clat (usec): min=692, max=5489, avg=3221.94, stdev=969.18 00:23:58.140 lat (usec): min=702, max=5517, avg=3236.28, stdev=969.19 00:23:58.140 clat percentiles (usec): 00:23:58.140 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1958], 20.00th=[ 2245], 00:23:58.140 | 30.00th=[ 2442], 40.00th=[ 2671], 50.00th=[ 2900], 60.00th=[ 3884], 00:23:58.140 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:23:58.140 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4817], 99.95th=[ 4817], 00:23:58.140 | 99.99th=[ 5014] 00:23:58.140 bw ( KiB/s): min=18368, max=20768, per=26.58%, avg=19653.33, stdev=647.65, samples=9 00:23:58.140 iops : min= 2296, max= 2596, avg=2456.67, stdev=80.96, samples=9 00:23:58.140 lat (usec) : 750=0.01% 00:23:58.140 lat (msec) : 2=12.29%, 4=52.11%, 10=35.59% 00:23:58.140 cpu : usr=94.38%, sys=4.90%, ctx=10, majf=0, minf=10 00:23:58.140 IO depths : 1=0.1%, 2=1.4%, 4=62.9%, 8=35.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 complete : 0=0.0%, 4=99.5%, 8=0.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 issued rwts: total=12291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:58.140 filename1: (groupid=0, jobs=1): err= 0: pid=79764: Wed Apr 24 20:15:39 2024 00:23:58.140 read: IOPS=2497, BW=19.5MiB/s (20.5MB/s)(97.6MiB/5002msec) 00:23:58.140 slat (nsec): min=5903, max=42140, avg=12321.83, stdev=4381.69 00:23:58.140 clat (usec): min=672, max=6021, avg=3175.01, stdev=996.01 00:23:58.140 lat (usec): min=680, max=6035, avg=3187.33, stdev=996.29 00:23:58.140 clat percentiles (usec): 00:23:58.140 | 1.00th=[ 1205], 5.00th=[ 1844], 10.00th=[ 1958], 20.00th=[ 2212], 
00:23:58.140 | 30.00th=[ 2442], 40.00th=[ 2606], 50.00th=[ 2802], 60.00th=[ 3851], 00:23:58.140 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:23:58.140 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4883], 99.95th=[ 5211], 00:23:58.140 | 99.99th=[ 5276] 00:23:58.140 bw ( KiB/s): min=19184, max=21620, per=27.06%, avg=20009.33, stdev=742.12, samples=9 00:23:58.140 iops : min= 2398, max= 2702, avg=2501.11, stdev=92.63, samples=9 00:23:58.140 lat (usec) : 750=0.02%, 1000=0.01% 00:23:58.140 lat (msec) : 2=12.72%, 4=51.99%, 10=35.27% 00:23:58.140 cpu : usr=94.26%, sys=5.02%, ctx=6, majf=0, minf=9 00:23:58.140 IO depths : 1=0.1%, 2=0.3%, 4=63.5%, 8=36.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 issued rwts: total=12493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:58.140 filename1: (groupid=0, jobs=1): err= 0: pid=79765: Wed Apr 24 20:15:39 2024 00:23:58.140 read: IOPS=2458, BW=19.2MiB/s (20.1MB/s)(96.0MiB/5001msec) 00:23:58.140 slat (nsec): min=6057, max=82820, avg=14482.81, stdev=4011.32 00:23:58.140 clat (usec): min=564, max=5500, avg=3219.54, stdev=969.92 00:23:58.140 lat (usec): min=571, max=5528, avg=3234.03, stdev=969.31 00:23:58.140 clat percentiles (usec): 00:23:58.140 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1942], 20.00th=[ 2245], 00:23:58.140 | 30.00th=[ 2442], 40.00th=[ 2671], 50.00th=[ 2900], 60.00th=[ 3884], 00:23:58.140 | 70.00th=[ 4113], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:23:58.140 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 4817], 99.95th=[ 4817], 00:23:58.140 | 99.99th=[ 5014] 00:23:58.140 bw ( KiB/s): min=18404, max=20768, per=26.59%, avg=19657.33, stdev=636.36, samples=9 00:23:58.140 iops : min= 2300, max= 2596, avg=2457.11, stdev=79.67, samples=9 00:23:58.140 lat (usec) : 750=0.03% 00:23:58.140 lat (msec) : 2=12.50%, 4=51.98%, 10=35.49% 00:23:58.140 cpu : usr=94.82%, sys=4.42%, ctx=41, majf=0, minf=9 00:23:58.140 IO depths : 1=0.1%, 2=1.4%, 4=62.9%, 8=35.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 complete : 0=0.0%, 4=99.5%, 8=0.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.140 issued rwts: total=12294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.140 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:58.140 00:23:58.140 Run status group 0 (all jobs): 00:23:58.140 READ: bw=72.2MiB/s (75.7MB/s), 14.3MiB/s-19.5MiB/s (15.0MB/s-20.5MB/s), io=361MiB (379MB), run=5001-5002msec 00:23:58.140 20:15:39 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:58.140 20:15:39 -- target/dif.sh@43 -- # local sub 00:23:58.140 20:15:39 -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.140 20:15:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.140 20:15:39 -- target/dif.sh@36 -- # local sub_id=0 00:23:58.140 20:15:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.140 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.140 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.140 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.140 20:15:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.140 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- 
common/autotest_common.sh@10 -- # set +x 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 20:15:39 -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.141 20:15:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:58.141 20:15:39 -- target/dif.sh@36 -- # local sub_id=1 00:23:58.141 20:15:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.141 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 20:15:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:58.141 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 00:23:58.141 real 0m23.496s 00:23:58.141 user 2m6.388s 00:23:58.141 sys 0m6.453s 00:23:58.141 20:15:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 ************************************ 00:23:58.141 END TEST fio_dif_rand_params 00:23:58.141 ************************************ 00:23:58.141 20:15:39 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:58.141 20:15:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:58.141 20:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 ************************************ 00:23:58.141 START TEST fio_dif_digest 00:23:58.141 ************************************ 00:23:58.141 20:15:39 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:23:58.141 20:15:39 -- target/dif.sh@123 -- # local NULL_DIF 00:23:58.141 20:15:39 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:58.141 20:15:39 -- target/dif.sh@125 -- # local hdgst ddgst 00:23:58.141 20:15:39 -- target/dif.sh@127 -- # NULL_DIF=3 00:23:58.141 20:15:39 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:58.141 20:15:39 -- target/dif.sh@127 -- # numjobs=3 00:23:58.141 20:15:39 -- target/dif.sh@127 -- # iodepth=3 00:23:58.141 20:15:39 -- target/dif.sh@127 -- # runtime=10 00:23:58.141 20:15:39 -- target/dif.sh@128 -- # hdgst=true 00:23:58.141 20:15:39 -- target/dif.sh@128 -- # ddgst=true 00:23:58.141 20:15:39 -- target/dif.sh@130 -- # create_subsystems 0 00:23:58.141 20:15:39 -- target/dif.sh@28 -- # local sub 00:23:58.141 20:15:39 -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.141 20:15:39 -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.141 20:15:39 -- target/dif.sh@18 -- # local sub_id=0 00:23:58.141 20:15:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:58.141 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 bdev_null0 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 20:15:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.141 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 20:15:39 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.141 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 20:15:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.141 20:15:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.141 20:15:39 -- common/autotest_common.sh@10 -- # set +x 00:23:58.141 [2024-04-24 20:15:39.952768] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.141 20:15:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.141 20:15:39 -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:58.141 20:15:39 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:58.141 20:15:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:58.141 20:15:39 -- nvmf/common.sh@521 -- # config=() 00:23:58.141 20:15:39 -- nvmf/common.sh@521 -- # local subsystem config 00:23:58.141 20:15:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.141 20:15:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:58.141 20:15:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:58.141 { 00:23:58.141 "params": { 00:23:58.141 "name": "Nvme$subsystem", 00:23:58.141 "trtype": "$TEST_TRANSPORT", 00:23:58.141 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.141 "adrfam": "ipv4", 00:23:58.141 "trsvcid": "$NVMF_PORT", 00:23:58.141 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.141 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.141 "hdgst": ${hdgst:-false}, 00:23:58.141 "ddgst": ${ddgst:-false} 00:23:58.141 }, 00:23:58.141 "method": "bdev_nvme_attach_controller" 00:23:58.141 } 00:23:58.141 EOF 00:23:58.141 )") 00:23:58.141 20:15:39 -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.141 20:15:39 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.141 20:15:39 -- target/dif.sh@54 -- # local file 00:23:58.141 20:15:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:23:58.141 20:15:39 -- target/dif.sh@56 -- # cat 00:23:58.141 20:15:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.141 20:15:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:23:58.141 20:15:39 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.141 20:15:39 -- common/autotest_common.sh@1327 -- # shift 00:23:58.141 20:15:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:23:58.141 20:15:39 -- nvmf/common.sh@543 -- # cat 00:23:58.141 20:15:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.141 20:15:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.141 20:15:39 -- nvmf/common.sh@545 -- # jq . 
00:23:58.141 20:15:39 -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.141 20:15:39 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.141 20:15:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:23:58.141 20:15:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:58.141 20:15:39 -- nvmf/common.sh@546 -- # IFS=, 00:23:58.141 20:15:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:58.141 "params": { 00:23:58.141 "name": "Nvme0", 00:23:58.141 "trtype": "tcp", 00:23:58.141 "traddr": "10.0.0.2", 00:23:58.141 "adrfam": "ipv4", 00:23:58.141 "trsvcid": "4420", 00:23:58.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.141 "hdgst": true, 00:23:58.141 "ddgst": true 00:23:58.141 }, 00:23:58.141 "method": "bdev_nvme_attach_controller" 00:23:58.141 }' 00:23:58.141 20:15:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:58.141 20:15:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:58.141 20:15:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.141 20:15:40 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.141 20:15:40 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:23:58.141 20:15:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:23:58.141 20:15:40 -- common/autotest_common.sh@1331 -- # asan_lib= 00:23:58.141 20:15:40 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:23:58.141 20:15:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.141 20:15:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.141 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:58.141 ... 
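The attach-controller parameters printed just above are the same as in the earlier run except that "hdgst" and "ddgst" are now true, which enables NVMe/TCP header and data digests (CRC32C checks on the PDU header and payload) on the initiator side. Reusing the hypothetical /tmp/nvme0.json from the earlier sketch, the toggle is a one-line jq edit:

  # Enable TCP header/data digests in the hypothetical config from the earlier sketch
  jq '.subsystems[0].config[0].params.hdgst = true | .subsystems[0].config[0].params.ddgst = true' \
    /tmp/nvme0.json > /tmp/nvme0-digest.json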
00:23:58.141 fio-3.35 00:23:58.141 Starting 3 threads 00:24:10.348 00:24:10.348 filename0: (groupid=0, jobs=1): err= 0: pid=79876: Wed Apr 24 20:15:50 2024 00:24:10.348 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(302MiB/10006msec) 00:24:10.348 slat (nsec): min=6332, max=77998, avg=11189.25, stdev=4976.95 00:24:10.348 clat (usec): min=8007, max=13758, avg=12415.99, stdev=552.62 00:24:10.348 lat (usec): min=8016, max=13776, avg=12427.18, stdev=553.18 00:24:10.348 clat percentiles (usec): 00:24:10.348 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11731], 20.00th=[12125], 00:24:10.348 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12649], 00:24:10.348 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:24:10.348 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:24:10.348 | 99.99th=[13698] 00:24:10.348 bw ( KiB/s): min=29184, max=33724, per=33.36%, avg=30878.11, stdev=1151.92, samples=19 00:24:10.348 iops : min= 228, max= 263, avg=241.21, stdev= 8.94, samples=19 00:24:10.348 lat (msec) : 10=0.12%, 20=99.88% 00:24:10.348 cpu : usr=93.31%, sys=6.19%, ctx=165, majf=0, minf=0 00:24:10.348 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:10.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.348 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:10.348 filename0: (groupid=0, jobs=1): err= 0: pid=79877: Wed Apr 24 20:15:50 2024 00:24:10.348 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(302MiB/10008msec) 00:24:10.348 slat (nsec): min=6240, max=63619, avg=11574.69, stdev=5120.59 00:24:10.348 clat (usec): min=7701, max=13730, avg=12417.15, stdev=550.69 00:24:10.348 lat (usec): min=7709, max=13757, avg=12428.72, stdev=551.30 00:24:10.348 clat percentiles (usec): 00:24:10.348 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11731], 20.00th=[12125], 00:24:10.348 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:24:10.348 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:24:10.348 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:24:10.348 | 99.99th=[13698] 00:24:10.348 bw ( KiB/s): min=29952, max=33792, per=33.37%, avg=30881.68, stdev=1073.34, samples=19 00:24:10.348 iops : min= 234, max= 264, avg=241.26, stdev= 8.39, samples=19 00:24:10.348 lat (msec) : 10=0.12%, 20=99.88% 00:24:10.348 cpu : usr=93.75%, sys=5.79%, ctx=13, majf=0, minf=0 00:24:10.348 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:10.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.348 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:10.348 filename0: (groupid=0, jobs=1): err= 0: pid=79878: Wed Apr 24 20:15:50 2024 00:24:10.348 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(302MiB/10007msec) 00:24:10.348 slat (nsec): min=6353, max=46240, avg=11673.12, stdev=5769.87 00:24:10.348 clat (usec): min=9207, max=13759, avg=12416.08, stdev=536.41 00:24:10.348 lat (usec): min=9215, max=13787, avg=12427.75, stdev=537.41 00:24:10.348 clat percentiles (usec): 00:24:10.348 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11731], 20.00th=[12125], 00:24:10.348 | 30.00th=[12387], 
40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:24:10.348 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:24:10.348 | 99.00th=[13304], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:24:10.348 | 99.99th=[13698] 00:24:10.348 bw ( KiB/s): min=29184, max=33792, per=33.37%, avg=30881.68, stdev=1132.75, samples=19 00:24:10.348 iops : min= 228, max= 264, avg=241.26, stdev= 8.85, samples=19 00:24:10.348 lat (msec) : 10=0.12%, 20=99.88% 00:24:10.348 cpu : usr=93.96%, sys=5.59%, ctx=19, majf=0, minf=0 00:24:10.348 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:10.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.348 issued rwts: total=2412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.348 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:10.348 00:24:10.348 Run status group 0 (all jobs): 00:24:10.348 READ: bw=90.4MiB/s (94.8MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=905MiB (948MB), run=10006-10008msec 00:24:10.348 20:15:50 -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:10.348 20:15:50 -- target/dif.sh@43 -- # local sub 00:24:10.348 20:15:50 -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.348 20:15:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:10.348 20:15:50 -- target/dif.sh@36 -- # local sub_id=0 00:24:10.348 20:15:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:10.348 20:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.348 20:15:50 -- common/autotest_common.sh@10 -- # set +x 00:24:10.348 20:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.348 20:15:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:10.348 20:15:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.348 20:15:50 -- common/autotest_common.sh@10 -- # set +x 00:24:10.348 20:15:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.348 00:24:10.348 real 0m11.020s 00:24:10.348 user 0m28.778s 00:24:10.348 sys 0m2.036s 00:24:10.348 20:15:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:10.348 20:15:50 -- common/autotest_common.sh@10 -- # set +x 00:24:10.348 ************************************ 00:24:10.348 END TEST fio_dif_digest 00:24:10.348 ************************************ 00:24:10.348 20:15:50 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:10.348 20:15:50 -- target/dif.sh@147 -- # nvmftestfini 00:24:10.348 20:15:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:10.348 20:15:50 -- nvmf/common.sh@117 -- # sync 00:24:10.348 20:15:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.348 20:15:51 -- nvmf/common.sh@120 -- # set +e 00:24:10.348 20:15:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.348 20:15:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.348 rmmod nvme_tcp 00:24:10.348 rmmod nvme_fabrics 00:24:10.348 rmmod nvme_keyring 00:24:10.348 20:15:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.348 20:15:51 -- nvmf/common.sh@124 -- # set -e 00:24:10.348 20:15:51 -- nvmf/common.sh@125 -- # return 0 00:24:10.348 20:15:51 -- nvmf/common.sh@478 -- # '[' -n 79087 ']' 00:24:10.348 20:15:51 -- nvmf/common.sh@479 -- # killprocess 79087 00:24:10.348 20:15:51 -- common/autotest_common.sh@936 -- # '[' -z 79087 ']' 00:24:10.348 20:15:51 -- common/autotest_common.sh@940 -- # kill -0 79087 00:24:10.348 20:15:51 -- 
common/autotest_common.sh@941 -- # uname 00:24:10.348 20:15:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:10.348 20:15:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79087 00:24:10.348 20:15:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:10.348 20:15:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:10.349 20:15:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79087' 00:24:10.349 killing process with pid 79087 00:24:10.349 20:15:51 -- common/autotest_common.sh@955 -- # kill 79087 00:24:10.349 [2024-04-24 20:15:51.114825] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:10.349 20:15:51 -- common/autotest_common.sh@960 -- # wait 79087 00:24:10.349 20:15:51 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:10.349 20:15:51 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:10.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:10.349 Waiting for block devices as requested 00:24:10.349 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:10.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:10.349 20:15:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:10.349 20:15:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.349 20:15:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.349 20:15:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:10.349 20:15:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.349 20:15:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:10.349 ************************************ 00:24:10.349 END TEST nvmf_dif 00:24:10.349 ************************************ 00:24:10.349 00:24:10.349 real 1m0.225s 00:24:10.349 user 3m52.793s 00:24:10.349 sys 0m16.299s 00:24:10.349 20:15:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:10.349 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:24:10.349 20:15:52 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:10.349 20:15:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:10.349 20:15:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.349 20:15:52 -- common/autotest_common.sh@10 -- # set +x 00:24:10.349 ************************************ 00:24:10.349 START TEST nvmf_abort_qd_sizes 00:24:10.349 ************************************ 00:24:10.349 20:15:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:10.349 * Looking for test storage... 
00:24:10.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:10.349 20:15:52 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:10.349 20:15:52 -- nvmf/common.sh@7 -- # uname -s 00:24:10.349 20:15:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.349 20:15:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.349 20:15:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.349 20:15:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.349 20:15:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.349 20:15:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.349 20:15:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.349 20:15:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.349 20:15:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.349 20:15:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:24:10.349 20:15:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:24:10.349 20:15:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.349 20:15:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.349 20:15:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:10.349 20:15:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.349 20:15:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:10.349 20:15:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.349 20:15:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.349 20:15:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.349 20:15:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.349 20:15:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.349 20:15:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.349 20:15:52 -- paths/export.sh@5 -- # export PATH 00:24:10.349 20:15:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.349 20:15:52 -- nvmf/common.sh@47 -- # : 0 00:24:10.349 20:15:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:10.349 20:15:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:10.349 20:15:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.349 20:15:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.349 20:15:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.349 20:15:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:10.349 20:15:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:10.349 20:15:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:10.349 20:15:52 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:10.349 20:15:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:10.349 20:15:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.349 20:15:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:10.349 20:15:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:10.349 20:15:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:10.349 20:15:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.349 20:15:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:10.349 20:15:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.349 20:15:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:10.349 20:15:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:10.349 20:15:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.349 20:15:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.349 20:15:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:10.349 20:15:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:10.349 20:15:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:10.349 20:15:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:10.349 20:15:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:10.349 20:15:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.349 20:15:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:10.349 20:15:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:10.349 20:15:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:10.349 20:15:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:10.349 20:15:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:10.349 20:15:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:10.349 Cannot find device "nvmf_tgt_br" 00:24:10.349 20:15:52 -- nvmf/common.sh@155 -- # true 00:24:10.349 20:15:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:10.349 Cannot find device "nvmf_tgt_br2" 00:24:10.349 20:15:52 -- nvmf/common.sh@156 -- # true 
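The "Cannot find device" and "Cannot open network namespace" messages above are only the cleanup pass removing interfaces left over from a previous run; the setup that follows builds the topology from scratch. A trimmed standalone sketch of that topology, with one veth pair instead of the two the script creates (interface names and addresses as used by nvmf_veth_init):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge the two free veth ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # same NVMe/TCP port rule as the trace
  ping -c 1 10.0.0.2                                                  # same reachability check as the trace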
00:24:10.349 20:15:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:10.349 20:15:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:10.349 Cannot find device "nvmf_tgt_br" 00:24:10.349 20:15:52 -- nvmf/common.sh@158 -- # true 00:24:10.349 20:15:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:10.349 Cannot find device "nvmf_tgt_br2" 00:24:10.349 20:15:52 -- nvmf/common.sh@159 -- # true 00:24:10.349 20:15:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:10.349 20:15:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:10.349 20:15:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:10.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.349 20:15:52 -- nvmf/common.sh@162 -- # true 00:24:10.349 20:15:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:10.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:10.349 20:15:52 -- nvmf/common.sh@163 -- # true 00:24:10.349 20:15:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:10.349 20:15:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:10.349 20:15:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:10.609 20:15:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:10.609 20:15:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:10.609 20:15:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:10.609 20:15:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:10.610 20:15:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:10.610 20:15:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:10.610 20:15:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:10.610 20:15:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:10.610 20:15:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:10.610 20:15:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:10.610 20:15:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:10.610 20:15:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:10.610 20:15:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:10.610 20:15:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:10.610 20:15:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:10.610 20:15:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:10.610 20:15:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:10.610 20:15:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:10.610 20:15:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:10.610 20:15:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:10.610 20:15:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:10.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:10.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:24:10.610 00:24:10.610 --- 10.0.0.2 ping statistics --- 00:24:10.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.610 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:10.610 20:15:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:10.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:10.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:24:10.610 00:24:10.610 --- 10.0.0.3 ping statistics --- 00:24:10.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.610 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:10.610 20:15:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:10.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:24:10.610 00:24:10.610 --- 10.0.0.1 ping statistics --- 00:24:10.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.610 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:10.610 20:15:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.610 20:15:52 -- nvmf/common.sh@422 -- # return 0 00:24:10.610 20:15:52 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:24:10.610 20:15:52 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:11.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:11.562 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:11.562 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:11.562 20:15:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.562 20:15:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:11.562 20:15:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:11.562 20:15:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.562 20:15:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:11.562 20:15:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:11.562 20:15:53 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:11.562 20:15:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:11.562 20:15:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:11.562 20:15:53 -- common/autotest_common.sh@10 -- # set +x 00:24:11.562 20:15:53 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:11.562 20:15:53 -- nvmf/common.sh@470 -- # nvmfpid=80479 00:24:11.562 20:15:53 -- nvmf/common.sh@471 -- # waitforlisten 80479 00:24:11.562 20:15:53 -- common/autotest_common.sh@817 -- # '[' -z 80479 ']' 00:24:11.562 20:15:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.562 20:15:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:11.562 20:15:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.562 20:15:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:11.562 20:15:53 -- common/autotest_common.sh@10 -- # set +x 00:24:11.563 [2024-04-24 20:15:53.791973] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
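With the namespace in place, nvmfappstart launches the target inside it with an explicit core mask, which is what produces the four "Reactor started" lines further down; the traced invocation amounts to:

  # Launch the target inside the test namespace: -m 0xf puts 4 reactors on cores 0-3,
  # -e 0xFFFF enables every tracepoint group, -i 0 is the shared-memory id used later by 'spdk_trace -i 0'
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &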
00:24:11.563 [2024-04-24 20:15:53.792034] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.823 [2024-04-24 20:15:53.931132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.823 [2024-04-24 20:15:54.032374] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.823 [2024-04-24 20:15:54.032511] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.823 [2024-04-24 20:15:54.032564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.823 [2024-04-24 20:15:54.032584] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.823 [2024-04-24 20:15:54.032589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.823 [2024-04-24 20:15:54.032754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.823 [2024-04-24 20:15:54.032907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.823 [2024-04-24 20:15:54.034310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.823 [2024-04-24 20:15:54.034311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.760 20:15:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:12.760 20:15:54 -- common/autotest_common.sh@850 -- # return 0 00:24:12.760 20:15:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:12.760 20:15:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:12.760 20:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:12.760 20:15:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:12.760 20:15:54 -- scripts/common.sh@309 -- # local bdf bdfs 00:24:12.760 20:15:54 -- scripts/common.sh@310 -- # local nvmes 00:24:12.760 20:15:54 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:24:12.760 20:15:54 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:12.760 20:15:54 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:24:12.760 20:15:54 -- scripts/common.sh@295 -- # local bdf= 00:24:12.760 20:15:54 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:24:12.760 20:15:54 -- scripts/common.sh@230 -- # local class 00:24:12.760 20:15:54 -- scripts/common.sh@231 -- # local subclass 00:24:12.760 20:15:54 -- scripts/common.sh@232 -- # local progif 00:24:12.760 20:15:54 -- scripts/common.sh@233 -- # printf %02x 1 00:24:12.760 20:15:54 -- scripts/common.sh@233 -- # class=01 00:24:12.760 20:15:54 -- scripts/common.sh@234 -- # printf %02x 8 00:24:12.760 20:15:54 -- scripts/common.sh@234 -- # subclass=08 00:24:12.760 20:15:54 -- scripts/common.sh@235 -- # printf %02x 2 00:24:12.760 20:15:54 -- scripts/common.sh@235 -- # progif=02 00:24:12.760 20:15:54 -- scripts/common.sh@237 -- # hash lspci 00:24:12.760 20:15:54 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:24:12.760 20:15:54 -- scripts/common.sh@239 -- 
# lspci -mm -n -D 00:24:12.760 20:15:54 -- scripts/common.sh@240 -- # grep -i -- -p02 00:24:12.760 20:15:54 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:12.760 20:15:54 -- scripts/common.sh@242 -- # tr -d '"' 00:24:12.760 20:15:54 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:12.760 20:15:54 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:24:12.760 20:15:54 -- scripts/common.sh@15 -- # local i 00:24:12.760 20:15:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:24:12.760 20:15:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:12.760 20:15:54 -- scripts/common.sh@24 -- # return 0 00:24:12.760 20:15:54 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:24:12.760 20:15:54 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:12.760 20:15:54 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:24:12.760 20:15:54 -- scripts/common.sh@15 -- # local i 00:24:12.760 20:15:54 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:24:12.760 20:15:54 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:12.760 20:15:54 -- scripts/common.sh@24 -- # return 0 00:24:12.760 20:15:54 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:24:12.760 20:15:54 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:12.760 20:15:54 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:12.760 20:15:54 -- scripts/common.sh@320 -- # uname -s 00:24:12.760 20:15:54 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:12.760 20:15:54 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:12.760 20:15:54 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:12.760 20:15:54 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:12.760 20:15:54 -- scripts/common.sh@320 -- # uname -s 00:24:12.760 20:15:54 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:12.760 20:15:54 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:12.760 20:15:54 -- scripts/common.sh@325 -- # (( 2 )) 00:24:12.760 20:15:54 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:12.760 20:15:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:12.760 20:15:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:12.760 20:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:12.760 ************************************ 00:24:12.760 START TEST spdk_target_abort 00:24:12.760 ************************************ 00:24:12.760 20:15:54 -- common/autotest_common.sh@1111 -- # spdk_target 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:12.760 20:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.760 20:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:12.760 spdk_targetn1 00:24:12.760 20:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.760 20:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.760 20:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:12.760 
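nvme_in_userspace, traced above, discovers NVMe controllers by PCI class code (class 01h mass storage, subclass 08h non-volatile memory, prog-if 02h NVMe). The pipeline it assembles is, in effect:

  # List NVMe controller BDFs by class code, exactly as the traced scripts/common.sh helpers do
  lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # On this VM it yields the two emulated controllers the abort tests then drive:
  #   0000:00:10.0
  #   0000:00:11.0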
[2024-04-24 20:15:54.987114] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.760 20:15:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.760 20:15:54 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:12.760 20:15:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.760 20:15:54 -- common/autotest_common.sh@10 -- # set +x 00:24:12.760 20:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.760 20:15:55 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:12.760 20:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.760 20:15:55 -- common/autotest_common.sh@10 -- # set +x 00:24:13.018 20:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:24:13.018 20:15:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.018 20:15:55 -- common/autotest_common.sh@10 -- # set +x 00:24:13.018 [2024-04-24 20:15:55.027433] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:13.018 [2024-04-24 20:15:55.027863] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.018 20:15:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:13.018 20:15:55 -- target/abort_qd_sizes.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:16.304 Initializing NVMe Controllers 00:24:16.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:16.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:16.304 Initialization complete. Launching workers. 00:24:16.304 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11798, failed: 0 00:24:16.304 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1117, failed to submit 10681 00:24:16.304 success 781, unsuccess 336, failed 0 00:24:16.304 20:15:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:16.304 20:15:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:19.594 Initializing NVMe Controllers 00:24:19.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:19.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:19.594 Initialization complete. Launching workers. 00:24:19.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8915, failed: 0 00:24:19.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1149, failed to submit 7766 00:24:19.594 success 393, unsuccess 756, failed 0 00:24:19.594 20:16:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:19.594 20:16:01 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:22.891 Initializing NVMe Controllers 00:24:22.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:24:22.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:22.891 Initialization complete. Launching workers. 
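Editor's note: the three abort runs in this test all come from the same rabort helper; a hedged sketch of the loop implied by the xtrace above (queue depths 4, 24 and 64 against a single transport ID string, flags copied from the trace):
# Sketch of the rabort helper: build the transport ID once, rerun the abort example per queue depth.
abort_bin=/home/vagrant/spdk_repo/spdk/build/examples/abort
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
  "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done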
00:24:22.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31912, failed: 0 00:24:22.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2402, failed to submit 29510 00:24:22.891 success 508, unsuccess 1894, failed 0 00:24:22.891 20:16:04 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:22.891 20:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.891 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:24:22.891 20:16:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.891 20:16:04 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:22.891 20:16:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.891 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:24:24.268 20:16:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.268 20:16:06 -- target/abort_qd_sizes.sh@61 -- # killprocess 80479 00:24:24.268 20:16:06 -- common/autotest_common.sh@936 -- # '[' -z 80479 ']' 00:24:24.268 20:16:06 -- common/autotest_common.sh@940 -- # kill -0 80479 00:24:24.268 20:16:06 -- common/autotest_common.sh@941 -- # uname 00:24:24.268 20:16:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:24.268 20:16:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80479 00:24:24.268 killing process with pid 80479 00:24:24.268 20:16:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:24.268 20:16:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:24.268 20:16:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80479' 00:24:24.268 20:16:06 -- common/autotest_common.sh@955 -- # kill 80479 00:24:24.268 [2024-04-24 20:16:06.357143] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:24.268 20:16:06 -- common/autotest_common.sh@960 -- # wait 80479 00:24:24.584 00:24:24.585 real 0m11.674s 00:24:24.585 user 0m47.734s 00:24:24.585 sys 0m1.846s 00:24:24.585 20:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:24.585 20:16:06 -- common/autotest_common.sh@10 -- # set +x 00:24:24.585 ************************************ 00:24:24.585 END TEST spdk_target_abort 00:24:24.585 ************************************ 00:24:24.585 20:16:06 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:24.585 20:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:24.585 20:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:24.585 20:16:06 -- common/autotest_common.sh@10 -- # set +x 00:24:24.585 ************************************ 00:24:24.585 START TEST kernel_target_abort 00:24:24.585 ************************************ 00:24:24.585 20:16:06 -- common/autotest_common.sh@1111 -- # kernel_target 00:24:24.585 20:16:06 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:24.585 20:16:06 -- nvmf/common.sh@717 -- # local ip 00:24:24.585 20:16:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.585 20:16:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.585 20:16:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.585 20:16:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.585 20:16:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.585 20:16:06 -- nvmf/common.sh@723 -- 
# [[ -z NVMF_INITIATOR_IP ]] 00:24:24.585 20:16:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.585 20:16:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.585 20:16:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.585 20:16:06 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:24.585 20:16:06 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:24.585 20:16:06 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:24.585 20:16:06 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:24.585 20:16:06 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:24.585 20:16:06 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:24.585 20:16:06 -- nvmf/common.sh@628 -- # local block nvme 00:24:24.585 20:16:06 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:24.585 20:16:06 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:24.585 20:16:06 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:24.585 20:16:06 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:25.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:25.152 Waiting for block devices as requested 00:24:25.152 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:25.152 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:25.411 20:16:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:25.411 20:16:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:25.411 20:16:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:25.411 20:16:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:25.411 20:16:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:25.411 20:16:07 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:25.411 20:16:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:25.411 No valid GPT data, bailing 00:24:25.411 20:16:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:25.411 20:16:07 -- scripts/common.sh@391 -- # pt= 00:24:25.411 20:16:07 -- scripts/common.sh@392 -- # return 1 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:25.411 20:16:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:25.411 20:16:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:24:25.411 20:16:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:25.411 20:16:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:25.411 20:16:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:24:25.411 20:16:07 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:24:25.411 20:16:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:25.411 No valid GPT data, bailing 00:24:25.411 20:16:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
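Editor's note: a hedged sketch of the namespace-selection loop running above and continuing below (the real helpers live in scripts/common.sh and nvmf/common.sh): each /sys/block/nvme* device is skipped if it is zoned or already carries a partition table, and the last free one is handed to the kernel target.
nvme=
for sysblk in /sys/block/nvme*; do
  dev=/dev/${sysblk##*/}
  # skip zoned namespaces (queue/zoned reports "none" for regular devices)
  [[ "$(cat "$sysblk/queue/zoned" 2>/dev/null)" == none ]] || continue
  # a non-empty PTTYPE means the device holds a partition table and is in use
  [[ -n "$(blkid -s PTTYPE -o value "$dev")" ]] && continue
  nvme=$dev
done
echo "using $nvme for the kernel target"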
00:24:25.411 20:16:07 -- scripts/common.sh@391 -- # pt= 00:24:25.411 20:16:07 -- scripts/common.sh@392 -- # return 1 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:24:25.411 20:16:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:25.411 20:16:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:24:25.411 20:16:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:25.411 20:16:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:25.411 20:16:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:24:25.411 20:16:07 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:24:25.411 20:16:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:25.411 No valid GPT data, bailing 00:24:25.411 20:16:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:25.411 20:16:07 -- scripts/common.sh@391 -- # pt= 00:24:25.411 20:16:07 -- scripts/common.sh@392 -- # return 1 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:24:25.411 20:16:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:25.411 20:16:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:24:25.411 20:16:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:25.411 20:16:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:25.411 20:16:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:25.411 20:16:07 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:24:25.411 20:16:07 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:24:25.411 20:16:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:25.670 No valid GPT data, bailing 00:24:25.670 20:16:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:25.670 20:16:07 -- scripts/common.sh@391 -- # pt= 00:24:25.670 20:16:07 -- scripts/common.sh@392 -- # return 1 00:24:25.670 20:16:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:24:25.670 20:16:07 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:24:25.670 20:16:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:25.670 20:16:07 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:25.670 20:16:07 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:25.670 20:16:07 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:25.670 20:16:07 -- nvmf/common.sh@656 -- # echo 1 00:24:25.670 20:16:07 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:24:25.670 20:16:07 -- nvmf/common.sh@658 -- # echo 1 00:24:25.670 20:16:07 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:25.670 20:16:07 -- nvmf/common.sh@661 -- # echo tcp 00:24:25.670 20:16:07 -- nvmf/common.sh@662 -- # echo 4420 00:24:25.670 20:16:07 -- nvmf/common.sh@663 -- # echo ipv4 00:24:25.670 20:16:07 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:25.670 20:16:07 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 
--hostid=19152f61-83a6-4d7e-88f6-d601ac0cc1cf -a 10.0.0.1 -t tcp -s 4420 00:24:25.670 00:24:25.670 Discovery Log Number of Records 2, Generation counter 2 00:24:25.670 =====Discovery Log Entry 0====== 00:24:25.670 trtype: tcp 00:24:25.670 adrfam: ipv4 00:24:25.670 subtype: current discovery subsystem 00:24:25.670 treq: not specified, sq flow control disable supported 00:24:25.670 portid: 1 00:24:25.670 trsvcid: 4420 00:24:25.670 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:25.670 traddr: 10.0.0.1 00:24:25.670 eflags: none 00:24:25.670 sectype: none 00:24:25.670 =====Discovery Log Entry 1====== 00:24:25.670 trtype: tcp 00:24:25.670 adrfam: ipv4 00:24:25.670 subtype: nvme subsystem 00:24:25.670 treq: not specified, sq flow control disable supported 00:24:25.670 portid: 1 00:24:25.670 trsvcid: 4420 00:24:25.670 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:25.670 traddr: 10.0.0.1 00:24:25.670 eflags: none 00:24:25.670 sectype: none 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:25.670 20:16:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:28.980 Initializing NVMe Controllers 00:24:28.980 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:28.980 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:28.980 Initialization complete. Launching workers. 
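Editor's note: set -x does not echo redirection targets, so the configure_kernel_target trace above shows only the mkdir/echo/ln -s commands themselves. Below is a hedged reconstruction using the standard Linux nvmet configfs attribute names (not copied from nvmf/common.sh); the SPDK-prefixed serial echo from the trace is omitted because its destination attribute is not visible in the log.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo 1 > "$sub/attr_allow_any_host"                 # assumed target of the first 'echo 1' above
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                    # export the subsystem on the port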
00:24:28.980 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40305, failed: 0 00:24:28.980 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40305, failed to submit 0 00:24:28.980 success 0, unsuccess 40305, failed 0 00:24:28.980 20:16:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:28.980 20:16:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:32.275 Initializing NVMe Controllers 00:24:32.275 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:32.275 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:32.275 Initialization complete. Launching workers. 00:24:32.275 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80370, failed: 0 00:24:32.275 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37415, failed to submit 42955 00:24:32.275 success 0, unsuccess 37415, failed 0 00:24:32.275 20:16:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:32.275 20:16:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:35.620 Initializing NVMe Controllers 00:24:35.620 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:35.620 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:35.620 Initialization complete. Launching workers. 00:24:35.620 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99894, failed: 0 00:24:35.620 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25058, failed to submit 74836 00:24:35.620 success 0, unsuccess 25058, failed 0 00:24:35.620 20:16:17 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:35.620 20:16:17 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:35.620 20:16:17 -- nvmf/common.sh@675 -- # echo 0 00:24:35.620 20:16:17 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:35.620 20:16:17 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:35.620 20:16:17 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:35.620 20:16:17 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:35.620 20:16:17 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:35.620 20:16:17 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:35.620 20:16:17 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:35.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:42.453 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:42.453 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:42.453 00:24:42.453 real 0m17.799s 00:24:42.453 user 0m6.916s 00:24:42.453 sys 0m8.618s 00:24:42.453 20:16:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:42.453 20:16:24 -- common/autotest_common.sh@10 -- # set +x 00:24:42.453 
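Editor's note: the clean_kernel_target steps traced above, written out as a hedged standalone sketch (the redirect target of the 'echo 0' is again an assumption, since xtrace hides it).
nqn=nqn.2016-06.io.spdk:testnqn
echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/$nqn
modprobe -r nvmet_tcp nvmet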
************************************ 00:24:42.453 END TEST kernel_target_abort 00:24:42.453 ************************************ 00:24:42.453 20:16:24 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:42.453 20:16:24 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:42.453 20:16:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:42.453 20:16:24 -- nvmf/common.sh@117 -- # sync 00:24:42.453 20:16:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.453 20:16:24 -- nvmf/common.sh@120 -- # set +e 00:24:42.453 20:16:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.453 20:16:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.453 rmmod nvme_tcp 00:24:42.453 rmmod nvme_fabrics 00:24:42.453 rmmod nvme_keyring 00:24:42.453 20:16:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.453 20:16:24 -- nvmf/common.sh@124 -- # set -e 00:24:42.453 20:16:24 -- nvmf/common.sh@125 -- # return 0 00:24:42.453 20:16:24 -- nvmf/common.sh@478 -- # '[' -n 80479 ']' 00:24:42.453 20:16:24 -- nvmf/common.sh@479 -- # killprocess 80479 00:24:42.453 20:16:24 -- common/autotest_common.sh@936 -- # '[' -z 80479 ']' 00:24:42.453 20:16:24 -- common/autotest_common.sh@940 -- # kill -0 80479 00:24:42.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (80479) - No such process 00:24:42.453 Process with pid 80479 is not found 00:24:42.453 20:16:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 80479 is not found' 00:24:42.453 20:16:24 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:24:42.453 20:16:24 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:43.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:43.020 Waiting for block devices as requested 00:24:43.020 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:43.279 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:43.279 20:16:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:43.279 20:16:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:43.279 20:16:25 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.279 20:16:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.279 20:16:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.279 20:16:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:43.279 20:16:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.279 20:16:25 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:43.279 00:24:43.279 real 0m33.213s 00:24:43.279 user 0m55.940s 00:24:43.279 sys 0m12.269s 00:24:43.279 20:16:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:43.279 20:16:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.279 ************************************ 00:24:43.279 END TEST nvmf_abort_qd_sizes 00:24:43.279 ************************************ 00:24:43.279 20:16:25 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:43.279 20:16:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:43.279 20:16:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:43.279 20:16:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.537 ************************************ 00:24:43.537 START TEST keyring_file 00:24:43.537 ************************************ 00:24:43.537 20:16:25 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:43.537 * Looking for test storage... 00:24:43.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:43.537 20:16:25 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:43.537 20:16:25 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:43.537 20:16:25 -- nvmf/common.sh@7 -- # uname -s 00:24:43.537 20:16:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.537 20:16:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.537 20:16:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.537 20:16:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.537 20:16:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.537 20:16:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.537 20:16:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.537 20:16:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.537 20:16:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.537 20:16:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.537 20:16:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:24:43.537 20:16:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=19152f61-83a6-4d7e-88f6-d601ac0cc1cf 00:24:43.537 20:16:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.537 20:16:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.537 20:16:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:43.537 20:16:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.537 20:16:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:43.537 20:16:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.537 20:16:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.537 20:16:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.537 20:16:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.537 20:16:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.537 20:16:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.537 20:16:25 -- paths/export.sh@5 -- # export PATH 00:24:43.537 20:16:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.537 20:16:25 -- nvmf/common.sh@47 -- # : 0 00:24:43.537 20:16:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.538 20:16:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.538 20:16:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.538 20:16:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.538 20:16:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.538 20:16:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.538 20:16:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.538 20:16:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.538 20:16:25 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:43.538 20:16:25 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:43.538 20:16:25 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:43.538 20:16:25 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:43.538 20:16:25 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:43.538 20:16:25 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:43.538 20:16:25 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:43.538 20:16:25 -- keyring/common.sh@15 -- # local name key digest path 00:24:43.538 20:16:25 -- keyring/common.sh@17 -- # name=key0 00:24:43.538 20:16:25 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:43.538 20:16:25 -- keyring/common.sh@17 -- # digest=0 00:24:43.538 20:16:25 -- keyring/common.sh@18 -- # mktemp 00:24:43.538 20:16:25 -- keyring/common.sh@18 -- # path=/tmp/tmp.vPr0Zdzhaq 00:24:43.538 20:16:25 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:43.538 20:16:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:43.538 20:16:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:43.538 20:16:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:43.538 20:16:25 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:43.538 20:16:25 -- nvmf/common.sh@693 -- # digest=0 00:24:43.538 20:16:25 -- nvmf/common.sh@694 -- # python - 00:24:43.797 20:16:25 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vPr0Zdzhaq 00:24:43.797 20:16:25 -- keyring/common.sh@23 -- # echo /tmp/tmp.vPr0Zdzhaq 00:24:43.797 20:16:25 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vPr0Zdzhaq 00:24:43.797 20:16:25 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:43.797 20:16:25 -- keyring/common.sh@15 -- # local name key digest path 00:24:43.797 20:16:25 -- keyring/common.sh@17 -- # name=key1 00:24:43.797 20:16:25 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:43.797 20:16:25 -- keyring/common.sh@17 -- # digest=0 00:24:43.797 20:16:25 -- keyring/common.sh@18 -- # mktemp 00:24:43.797 20:16:25 -- keyring/common.sh@18 -- # path=/tmp/tmp.5VmcPVyM9G 00:24:43.797 20:16:25 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:43.797 20:16:25 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:24:43.797 20:16:25 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:43.797 20:16:25 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:43.797 20:16:25 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:24:43.797 20:16:25 -- nvmf/common.sh@693 -- # digest=0 00:24:43.797 20:16:25 -- nvmf/common.sh@694 -- # python - 00:24:43.797 20:16:25 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5VmcPVyM9G 00:24:43.797 20:16:25 -- keyring/common.sh@23 -- # echo /tmp/tmp.5VmcPVyM9G 00:24:43.797 20:16:25 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5VmcPVyM9G 00:24:43.797 20:16:25 -- keyring/file.sh@30 -- # tgtpid=81378 00:24:43.797 20:16:25 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:43.797 20:16:25 -- keyring/file.sh@32 -- # waitforlisten 81378 00:24:43.797 20:16:25 -- common/autotest_common.sh@817 -- # '[' -z 81378 ']' 00:24:43.797 20:16:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.797 20:16:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:43.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.797 20:16:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.797 20:16:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:43.797 20:16:25 -- common/autotest_common.sh@10 -- # set +x 00:24:43.797 [2024-04-24 20:16:25.921486] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:24:43.797 [2024-04-24 20:16:25.921555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81378 ] 00:24:43.797 [2024-04-24 20:16:26.042478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.057 [2024-04-24 20:16:26.143641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.626 20:16:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:44.626 20:16:26 -- common/autotest_common.sh@850 -- # return 0 00:24:44.627 20:16:26 -- keyring/file.sh@33 -- # rpc_cmd 00:24:44.627 20:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.627 20:16:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.627 [2024-04-24 20:16:26.774347] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.627 null0 00:24:44.627 [2024-04-24 20:16:26.806230] nvmf_rpc.c: 621:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:44.627 [2024-04-24 20:16:26.806298] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:44.627 [2024-04-24 20:16:26.806479] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:44.627 [2024-04-24 20:16:26.814231] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:44.627 20:16:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.627 20:16:26 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:44.627 20:16:26 -- common/autotest_common.sh@638 -- # local es=0 00:24:44.627 20:16:26 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:44.627 20:16:26 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:44.627 20:16:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:44.627 20:16:26 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:44.627 20:16:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:44.627 20:16:26 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:44.627 20:16:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.627 20:16:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.627 [2024-04-24 20:16:26.826228] nvmf_rpc.c: 779:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:44.627 request: 00:24:44.627 { 00:24:44.627 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:44.627 "secure_channel": false, 00:24:44.627 "listen_address": { 00:24:44.627 "trtype": "tcp", 00:24:44.627 "traddr": "127.0.0.1", 00:24:44.627 "trsvcid": "4420" 00:24:44.627 }, 00:24:44.627 "method": "nvmf_subsystem_add_listener", 00:24:44.627 "req_id": 1 00:24:44.627 } 00:24:44.627 Got JSON-RPC error response 00:24:44.627 response: 00:24:44.627 { 00:24:44.627 "code": -32602, 00:24:44.627 "message": "Invalid parameters" 00:24:44.627 } 00:24:44.627 20:16:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:44.627 20:16:26 -- common/autotest_common.sh@641 -- # es=1 00:24:44.627 20:16:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:44.627 20:16:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:44.627 20:16:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:44.627 20:16:26 -- keyring/file.sh@46 -- # bperfpid=81395 00:24:44.627 20:16:26 -- keyring/file.sh@48 -- # waitforlisten 81395 /var/tmp/bperf.sock 00:24:44.627 20:16:26 -- common/autotest_common.sh@817 -- # '[' -z 81395 ']' 00:24:44.627 20:16:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:44.627 20:16:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:44.627 20:16:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:44.627 20:16:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.627 20:16:26 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:44.627 20:16:26 -- common/autotest_common.sh@10 -- # set +x 00:24:44.886 [2024-04-24 20:16:26.882604] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 
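Editor's note: for orientation before the bdevperf run below, a condensed sketch of the keyring flow this test exercises, using only commands that appear verbatim in the trace: the key files must be mode 0600, are registered with the bdevperf RPC server by name, and the controller then references the key name through --psk instead of a raw path.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
chmod 0600 /tmp/tmp.vPr0Zdzhaq /tmp/tmp.5VmcPVyM9G
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq
"$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.5VmcPVyM9G
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0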
00:24:44.886 [2024-04-24 20:16:26.882714] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81395 ] 00:24:44.886 [2024-04-24 20:16:27.004482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.886 [2024-04-24 20:16:27.096782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.820 20:16:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.820 20:16:27 -- common/autotest_common.sh@850 -- # return 0 00:24:45.820 20:16:27 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:45.821 20:16:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:45.821 20:16:27 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5VmcPVyM9G 00:24:45.821 20:16:27 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5VmcPVyM9G 00:24:46.080 20:16:28 -- keyring/file.sh@51 -- # get_key key0 00:24:46.080 20:16:28 -- keyring/file.sh@51 -- # jq -r .path 00:24:46.080 20:16:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.080 20:16:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.080 20:16:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.080 20:16:28 -- keyring/file.sh@51 -- # [[ /tmp/tmp.vPr0Zdzhaq == \/\t\m\p\/\t\m\p\.\v\P\r\0\Z\d\z\h\a\q ]] 00:24:46.080 20:16:28 -- keyring/file.sh@52 -- # get_key key1 00:24:46.080 20:16:28 -- keyring/file.sh@52 -- # jq -r .path 00:24:46.080 20:16:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.080 20:16:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.080 20:16:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:46.339 20:16:28 -- keyring/file.sh@52 -- # [[ /tmp/tmp.5VmcPVyM9G == \/\t\m\p\/\t\m\p\.\5\V\m\c\P\V\y\M\9\G ]] 00:24:46.339 20:16:28 -- keyring/file.sh@53 -- # get_refcnt key0 00:24:46.339 20:16:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.339 20:16:28 -- keyring/common.sh@12 -- # get_key key0 00:24:46.339 20:16:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.339 20:16:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.339 20:16:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:46.598 20:16:28 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:24:46.598 20:16:28 -- keyring/file.sh@54 -- # get_refcnt key1 00:24:46.598 20:16:28 -- keyring/common.sh@12 -- # get_key key1 00:24:46.598 20:16:28 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:46.598 20:16:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:46.598 20:16:28 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:46.598 20:16:28 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:46.856 20:16:28 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:46.856 20:16:28 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:24:46.856 20:16:28 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:47.161 [2024-04-24 20:16:29.179784] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.161 nvme0n1 00:24:47.161 20:16:29 -- keyring/file.sh@59 -- # get_refcnt key0 00:24:47.161 20:16:29 -- keyring/common.sh@12 -- # get_key key0 00:24:47.161 20:16:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.161 20:16:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.161 20:16:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:47.161 20:16:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.422 20:16:29 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:24:47.422 20:16:29 -- keyring/file.sh@60 -- # get_refcnt key1 00:24:47.422 20:16:29 -- keyring/common.sh@12 -- # get_key key1 00:24:47.422 20:16:29 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:47.422 20:16:29 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:47.422 20:16:29 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:47.422 20:16:29 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:47.681 20:16:29 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:24:47.681 20:16:29 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:47.681 Running I/O for 1 seconds... 00:24:48.617 00:24:48.617 Latency(us) 00:24:48.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.617 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:48.617 nvme0n1 : 1.01 15455.76 60.37 0.00 0.00 8243.86 5380.25 22894.67 00:24:48.617 =================================================================================================================== 00:24:48.617 Total : 15455.76 60.37 0.00 0.00 8243.86 5380.25 22894.67 00:24:48.617 0 00:24:48.618 20:16:30 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:48.618 20:16:30 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:48.877 20:16:31 -- keyring/file.sh@65 -- # get_refcnt key0 00:24:48.877 20:16:31 -- keyring/common.sh@12 -- # get_key key0 00:24:48.877 20:16:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:48.877 20:16:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:48.877 20:16:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:48.877 20:16:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.136 20:16:31 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:24:49.136 20:16:31 -- keyring/file.sh@66 -- # get_refcnt key1 00:24:49.136 20:16:31 -- keyring/common.sh@12 -- # get_key key1 00:24:49.136 20:16:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.136 20:16:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.136 20:16:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.136 20:16:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:49.395 
20:16:31 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:49.395 20:16:31 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:49.395 20:16:31 -- common/autotest_common.sh@638 -- # local es=0 00:24:49.395 20:16:31 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:49.395 20:16:31 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:49.395 20:16:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:49.395 20:16:31 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:49.395 20:16:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:49.395 20:16:31 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:49.395 20:16:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:49.655 [2024-04-24 20:16:31.740418] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:49.655 [2024-04-24 20:16:31.740771] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e663c0 (107): Transport endpoint is not connected 00:24:49.655 [2024-04-24 20:16:31.741762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e663c0 (9): Bad file descriptor 00:24:49.655 [2024-04-24 20:16:31.742755] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:49.655 [2024-04-24 20:16:31.742799] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:49.655 [2024-04-24 20:16:31.742806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:49.655 request: 00:24:49.655 { 00:24:49.655 "name": "nvme0", 00:24:49.655 "trtype": "tcp", 00:24:49.655 "traddr": "127.0.0.1", 00:24:49.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:49.655 "adrfam": "ipv4", 00:24:49.655 "trsvcid": "4420", 00:24:49.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:49.655 "psk": "key1", 00:24:49.655 "method": "bdev_nvme_attach_controller", 00:24:49.655 "req_id": 1 00:24:49.655 } 00:24:49.655 Got JSON-RPC error response 00:24:49.655 response: 00:24:49.655 { 00:24:49.655 "code": -32602, 00:24:49.655 "message": "Invalid parameters" 00:24:49.655 } 00:24:49.655 20:16:31 -- common/autotest_common.sh@641 -- # es=1 00:24:49.655 20:16:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:49.655 20:16:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:49.655 20:16:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:49.655 20:16:31 -- keyring/file.sh@71 -- # get_refcnt key0 00:24:49.655 20:16:31 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.655 20:16:31 -- keyring/common.sh@12 -- # get_key key0 00:24:49.655 20:16:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.655 20:16:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:49.655 20:16:31 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:49.915 20:16:32 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:24:49.915 20:16:32 -- keyring/file.sh@72 -- # get_refcnt key1 00:24:49.915 20:16:32 -- keyring/common.sh@12 -- # get_key key1 00:24:49.915 20:16:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:49.915 20:16:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:49.915 20:16:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:49.915 20:16:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.175 20:16:32 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:50.175 20:16:32 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:24:50.175 20:16:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:50.435 20:16:32 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:24:50.435 20:16:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:50.435 20:16:32 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:24:50.435 20:16:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:50.435 20:16:32 -- keyring/file.sh@77 -- # jq length 00:24:50.694 20:16:32 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:24:50.694 20:16:32 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.vPr0Zdzhaq 00:24:50.694 20:16:32 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:50.694 20:16:32 -- common/autotest_common.sh@638 -- # local es=0 00:24:50.694 20:16:32 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:50.694 20:16:32 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:50.694 20:16:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:50.694 20:16:32 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:50.694 20:16:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:50.694 20:16:32 -- 
common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:50.694 20:16:32 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:50.956 [2024-04-24 20:16:33.056776] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vPr0Zdzhaq': 0100660 00:24:50.956 [2024-04-24 20:16:33.056822] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:50.956 request: 00:24:50.956 { 00:24:50.956 "name": "key0", 00:24:50.956 "path": "/tmp/tmp.vPr0Zdzhaq", 00:24:50.956 "method": "keyring_file_add_key", 00:24:50.956 "req_id": 1 00:24:50.956 } 00:24:50.956 Got JSON-RPC error response 00:24:50.956 response: 00:24:50.956 { 00:24:50.956 "code": -1, 00:24:50.956 "message": "Operation not permitted" 00:24:50.956 } 00:24:50.956 20:16:33 -- common/autotest_common.sh@641 -- # es=1 00:24:50.956 20:16:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:50.956 20:16:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:50.956 20:16:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:50.956 20:16:33 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.vPr0Zdzhaq 00:24:50.956 20:16:33 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:50.957 20:16:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vPr0Zdzhaq 00:24:51.215 20:16:33 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.vPr0Zdzhaq 00:24:51.215 20:16:33 -- keyring/file.sh@88 -- # get_refcnt key0 00:24:51.215 20:16:33 -- keyring/common.sh@12 -- # get_key key0 00:24:51.215 20:16:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:51.215 20:16:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:51.215 20:16:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:51.215 20:16:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:51.474 20:16:33 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:24:51.474 20:16:33 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:51.474 20:16:33 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.474 20:16:33 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:51.474 20:16:33 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:24:51.474 20:16:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.474 20:16:33 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:24:51.474 20:16:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.474 20:16:33 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:51.474 20:16:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:51.474 [2024-04-24 20:16:33.691709] keyring.c: 
29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vPr0Zdzhaq': No such file or directory 00:24:51.474 [2024-04-24 20:16:33.691745] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:51.474 [2024-04-24 20:16:33.691784] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:51.474 [2024-04-24 20:16:33.691791] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:51.474 [2024-04-24 20:16:33.691808] bdev_nvme.c:6191:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:51.474 request: 00:24:51.474 { 00:24:51.474 "name": "nvme0", 00:24:51.474 "trtype": "tcp", 00:24:51.474 "traddr": "127.0.0.1", 00:24:51.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:51.474 "adrfam": "ipv4", 00:24:51.474 "trsvcid": "4420", 00:24:51.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:51.474 "psk": "key0", 00:24:51.474 "method": "bdev_nvme_attach_controller", 00:24:51.474 "req_id": 1 00:24:51.474 } 00:24:51.474 Got JSON-RPC error response 00:24:51.474 response: 00:24:51.474 { 00:24:51.474 "code": -19, 00:24:51.474 "message": "No such device" 00:24:51.474 } 00:24:51.474 20:16:33 -- common/autotest_common.sh@641 -- # es=1 00:24:51.474 20:16:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:51.474 20:16:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:51.474 20:16:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:51.474 20:16:33 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:24:51.474 20:16:33 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:51.741 20:16:33 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:51.741 20:16:33 -- keyring/common.sh@15 -- # local name key digest path 00:24:51.741 20:16:33 -- keyring/common.sh@17 -- # name=key0 00:24:51.741 20:16:33 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:51.741 20:16:33 -- keyring/common.sh@17 -- # digest=0 00:24:51.741 20:16:33 -- keyring/common.sh@18 -- # mktemp 00:24:51.741 20:16:33 -- keyring/common.sh@18 -- # path=/tmp/tmp.I9lS2efn6j 00:24:51.741 20:16:33 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:51.741 20:16:33 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:51.741 20:16:33 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:51.741 20:16:33 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:51.741 20:16:33 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:51.741 20:16:33 -- nvmf/common.sh@693 -- # digest=0 00:24:51.741 20:16:33 -- nvmf/common.sh@694 -- # python - 00:24:52.023 20:16:34 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I9lS2efn6j 00:24:52.023 20:16:34 -- keyring/common.sh@23 -- # echo /tmp/tmp.I9lS2efn6j 00:24:52.023 20:16:34 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.I9lS2efn6j 00:24:52.023 20:16:34 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I9lS2efn6j 00:24:52.023 20:16:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I9lS2efn6j 00:24:52.023 20:16:34 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 
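Editor's note: the bperf_cmd wrapper seen immediately above and the repeated keyring_get_keys / jq pairs throughout this test implement a few small helpers; a sketch of them, matching the jq filters visible in the trace:
# Sketch of keyring/common.sh's bperf_cmd / get_key / get_refcnt helpers as used above and below.
bperf_cmd()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }
get_refcnt key0   # prints 2 while an attached controller holds the key, 1 otherwise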
00:24:52.023 20:16:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:52.283 nvme0n1 00:24:52.283 20:16:34 -- keyring/file.sh@99 -- # get_refcnt key0 00:24:52.283 20:16:34 -- keyring/common.sh@12 -- # get_key key0 00:24:52.283 20:16:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:52.283 20:16:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:52.283 20:16:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.283 20:16:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:52.543 20:16:34 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:24:52.543 20:16:34 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:24:52.543 20:16:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:52.804 20:16:34 -- keyring/file.sh@101 -- # get_key key0 00:24:52.804 20:16:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:52.804 20:16:34 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:52.804 20:16:34 -- keyring/file.sh@101 -- # jq -r .removed 00:24:52.804 20:16:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:53.064 20:16:35 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:24:53.064 20:16:35 -- keyring/file.sh@102 -- # get_refcnt key0 00:24:53.064 20:16:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:53.064 20:16:35 -- keyring/common.sh@12 -- # get_key key0 00:24:53.064 20:16:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:53.064 20:16:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:53.064 20:16:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:53.323 20:16:35 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:24:53.323 20:16:35 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:53.323 20:16:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:53.583 20:16:35 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:24:53.583 20:16:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:53.583 20:16:35 -- keyring/file.sh@104 -- # jq length 00:24:53.583 20:16:35 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:24:53.583 20:16:35 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I9lS2efn6j 00:24:53.583 20:16:35 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I9lS2efn6j 00:24:53.843 20:16:36 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5VmcPVyM9G 00:24:53.843 20:16:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5VmcPVyM9G 00:24:54.101 20:16:36 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:54.101 20:16:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:54.360 nvme0n1 00:24:54.360 20:16:36 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:24:54.360 20:16:36 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:54.619 20:16:36 -- keyring/file.sh@112 -- # config='{ 00:24:54.619 "subsystems": [ 00:24:54.619 { 00:24:54.619 "subsystem": "keyring", 00:24:54.619 "config": [ 00:24:54.619 { 00:24:54.619 "method": "keyring_file_add_key", 00:24:54.619 "params": { 00:24:54.619 "name": "key0", 00:24:54.619 "path": "/tmp/tmp.I9lS2efn6j" 00:24:54.619 } 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "method": "keyring_file_add_key", 00:24:54.619 "params": { 00:24:54.619 "name": "key1", 00:24:54.619 "path": "/tmp/tmp.5VmcPVyM9G" 00:24:54.619 } 00:24:54.619 } 00:24:54.619 ] 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "subsystem": "iobuf", 00:24:54.619 "config": [ 00:24:54.619 { 00:24:54.619 "method": "iobuf_set_options", 00:24:54.619 "params": { 00:24:54.619 "small_pool_count": 8192, 00:24:54.619 "large_pool_count": 1024, 00:24:54.619 "small_bufsize": 8192, 00:24:54.619 "large_bufsize": 135168 00:24:54.619 } 00:24:54.619 } 00:24:54.619 ] 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "subsystem": "sock", 00:24:54.619 "config": [ 00:24:54.619 { 00:24:54.619 "method": "sock_impl_set_options", 00:24:54.619 "params": { 00:24:54.619 "impl_name": "uring", 00:24:54.619 "recv_buf_size": 2097152, 00:24:54.619 "send_buf_size": 2097152, 00:24:54.619 "enable_recv_pipe": true, 00:24:54.619 "enable_quickack": false, 00:24:54.619 "enable_placement_id": 0, 00:24:54.619 "enable_zerocopy_send_server": false, 00:24:54.619 "enable_zerocopy_send_client": false, 00:24:54.619 "zerocopy_threshold": 0, 00:24:54.619 "tls_version": 0, 00:24:54.619 "enable_ktls": false 00:24:54.619 } 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "method": "sock_impl_set_options", 00:24:54.619 "params": { 00:24:54.619 "impl_name": "posix", 00:24:54.619 "recv_buf_size": 2097152, 00:24:54.619 "send_buf_size": 2097152, 00:24:54.619 "enable_recv_pipe": true, 00:24:54.619 "enable_quickack": false, 00:24:54.619 "enable_placement_id": 0, 00:24:54.619 "enable_zerocopy_send_server": true, 00:24:54.619 "enable_zerocopy_send_client": false, 00:24:54.619 "zerocopy_threshold": 0, 00:24:54.619 "tls_version": 0, 00:24:54.619 "enable_ktls": false 00:24:54.619 } 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "method": "sock_impl_set_options", 00:24:54.619 "params": { 00:24:54.619 "impl_name": "ssl", 00:24:54.619 "recv_buf_size": 4096, 00:24:54.619 "send_buf_size": 4096, 00:24:54.619 "enable_recv_pipe": true, 00:24:54.619 "enable_quickack": false, 00:24:54.619 "enable_placement_id": 0, 00:24:54.619 "enable_zerocopy_send_server": true, 00:24:54.619 "enable_zerocopy_send_client": false, 00:24:54.619 "zerocopy_threshold": 0, 00:24:54.619 "tls_version": 0, 00:24:54.619 "enable_ktls": false 00:24:54.619 } 00:24:54.619 } 00:24:54.619 ] 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "subsystem": "vmd", 00:24:54.619 "config": [] 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "subsystem": "accel", 00:24:54.619 "config": [ 00:24:54.619 { 00:24:54.619 "method": "accel_set_options", 00:24:54.619 "params": { 00:24:54.619 "small_cache_size": 128, 00:24:54.619 "large_cache_size": 16, 00:24:54.619 "task_count": 2048, 00:24:54.619 "sequence_count": 2048, 00:24:54.619 "buf_count": 2048 00:24:54.619 } 00:24:54.619 } 
00:24:54.619 ] 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "subsystem": "bdev", 00:24:54.619 "config": [ 00:24:54.619 { 00:24:54.619 "method": "bdev_set_options", 00:24:54.619 "params": { 00:24:54.619 "bdev_io_pool_size": 65535, 00:24:54.619 "bdev_io_cache_size": 256, 00:24:54.619 "bdev_auto_examine": true, 00:24:54.619 "iobuf_small_cache_size": 128, 00:24:54.619 "iobuf_large_cache_size": 16 00:24:54.619 } 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "method": "bdev_raid_set_options", 00:24:54.619 "params": { 00:24:54.619 "process_window_size_kb": 1024 00:24:54.619 } 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "method": "bdev_iscsi_set_options", 00:24:54.619 "params": { 00:24:54.619 "timeout_sec": 30 00:24:54.619 } 00:24:54.619 }, 00:24:54.619 { 00:24:54.619 "method": "bdev_nvme_set_options", 00:24:54.619 "params": { 00:24:54.619 "action_on_timeout": "none", 00:24:54.619 "timeout_us": 0, 00:24:54.619 "timeout_admin_us": 0, 00:24:54.619 "keep_alive_timeout_ms": 10000, 00:24:54.619 "arbitration_burst": 0, 00:24:54.619 "low_priority_weight": 0, 00:24:54.619 "medium_priority_weight": 0, 00:24:54.619 "high_priority_weight": 0, 00:24:54.619 "nvme_adminq_poll_period_us": 10000, 00:24:54.619 "nvme_ioq_poll_period_us": 0, 00:24:54.619 "io_queue_requests": 512, 00:24:54.619 "delay_cmd_submit": true, 00:24:54.619 "transport_retry_count": 4, 00:24:54.619 "bdev_retry_count": 3, 00:24:54.619 "transport_ack_timeout": 0, 00:24:54.620 "ctrlr_loss_timeout_sec": 0, 00:24:54.620 "reconnect_delay_sec": 0, 00:24:54.620 "fast_io_fail_timeout_sec": 0, 00:24:54.620 "disable_auto_failback": false, 00:24:54.620 "generate_uuids": false, 00:24:54.620 "transport_tos": 0, 00:24:54.620 "nvme_error_stat": false, 00:24:54.620 "rdma_srq_size": 0, 00:24:54.620 "io_path_stat": false, 00:24:54.620 "allow_accel_sequence": false, 00:24:54.620 "rdma_max_cq_size": 0, 00:24:54.620 "rdma_cm_event_timeout_ms": 0, 00:24:54.620 "dhchap_digests": [ 00:24:54.620 "sha256", 00:24:54.620 "sha384", 00:24:54.620 "sha512" 00:24:54.620 ], 00:24:54.620 "dhchap_dhgroups": [ 00:24:54.620 "null", 00:24:54.620 "ffdhe2048", 00:24:54.620 "ffdhe3072", 00:24:54.620 "ffdhe4096", 00:24:54.620 "ffdhe6144", 00:24:54.620 "ffdhe8192" 00:24:54.620 ] 00:24:54.620 } 00:24:54.620 }, 00:24:54.620 { 00:24:54.620 "method": "bdev_nvme_attach_controller", 00:24:54.620 "params": { 00:24:54.620 "name": "nvme0", 00:24:54.620 "trtype": "TCP", 00:24:54.620 "adrfam": "IPv4", 00:24:54.620 "traddr": "127.0.0.1", 00:24:54.620 "trsvcid": "4420", 00:24:54.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.620 "prchk_reftag": false, 00:24:54.620 "prchk_guard": false, 00:24:54.620 "ctrlr_loss_timeout_sec": 0, 00:24:54.620 "reconnect_delay_sec": 0, 00:24:54.620 "fast_io_fail_timeout_sec": 0, 00:24:54.620 "psk": "key0", 00:24:54.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:54.620 "hdgst": false, 00:24:54.620 "ddgst": false 00:24:54.620 } 00:24:54.620 }, 00:24:54.620 { 00:24:54.620 "method": "bdev_nvme_set_hotplug", 00:24:54.620 "params": { 00:24:54.620 "period_us": 100000, 00:24:54.620 "enable": false 00:24:54.620 } 00:24:54.620 }, 00:24:54.620 { 00:24:54.620 "method": "bdev_wait_for_examine" 00:24:54.620 } 00:24:54.620 ] 00:24:54.620 }, 00:24:54.620 { 00:24:54.620 "subsystem": "nbd", 00:24:54.620 "config": [] 00:24:54.620 } 00:24:54.620 ] 00:24:54.620 }' 00:24:54.620 20:16:36 -- keyring/file.sh@114 -- # killprocess 81395 00:24:54.620 20:16:36 -- common/autotest_common.sh@936 -- # '[' -z 81395 ']' 00:24:54.620 20:16:36 -- common/autotest_common.sh@940 -- # kill -0 81395 
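The configuration blob captured by save_config above is replayed into the next bdevperf instance through process substitution (the '-c /dev/fd/63' argument visible below). A minimal sketch of that pattern, with bperf.json standing in for the saved output:

    # Sketch: start bdevperf from a previously saved JSON config, mirroring the
    # '-c /dev/fd/63' process substitution used by keyring/file.sh below.
    # bperf.json is a placeholder for the output of 'rpc.py save_config'.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(cat bperf.json)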
00:24:54.620 20:16:36 -- common/autotest_common.sh@941 -- # uname 00:24:54.620 20:16:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.620 20:16:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81395 00:24:54.620 killing process with pid 81395 00:24:54.620 Received shutdown signal, test time was about 1.000000 seconds 00:24:54.620 00:24:54.620 Latency(us) 00:24:54.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.620 =================================================================================================================== 00:24:54.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.620 20:16:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:54.620 20:16:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:54.620 20:16:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81395' 00:24:54.620 20:16:36 -- common/autotest_common.sh@955 -- # kill 81395 00:24:54.620 20:16:36 -- common/autotest_common.sh@960 -- # wait 81395 00:24:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.878 20:16:37 -- keyring/file.sh@117 -- # bperfpid=81628 00:24:54.878 20:16:37 -- keyring/file.sh@119 -- # waitforlisten 81628 /var/tmp/bperf.sock 00:24:54.878 20:16:37 -- common/autotest_common.sh@817 -- # '[' -z 81628 ']' 00:24:54.878 20:16:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.878 20:16:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:54.878 20:16:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.878 20:16:37 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:54.878 20:16:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:54.878 20:16:37 -- common/autotest_common.sh@10 -- # set +x 00:24:54.878 20:16:37 -- keyring/file.sh@115 -- # echo '{ 00:24:54.878 "subsystems": [ 00:24:54.878 { 00:24:54.878 "subsystem": "keyring", 00:24:54.878 "config": [ 00:24:54.878 { 00:24:54.878 "method": "keyring_file_add_key", 00:24:54.878 "params": { 00:24:54.878 "name": "key0", 00:24:54.878 "path": "/tmp/tmp.I9lS2efn6j" 00:24:54.878 } 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "method": "keyring_file_add_key", 00:24:54.878 "params": { 00:24:54.878 "name": "key1", 00:24:54.878 "path": "/tmp/tmp.5VmcPVyM9G" 00:24:54.878 } 00:24:54.878 } 00:24:54.878 ] 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "subsystem": "iobuf", 00:24:54.878 "config": [ 00:24:54.878 { 00:24:54.878 "method": "iobuf_set_options", 00:24:54.878 "params": { 00:24:54.878 "small_pool_count": 8192, 00:24:54.878 "large_pool_count": 1024, 00:24:54.878 "small_bufsize": 8192, 00:24:54.878 "large_bufsize": 135168 00:24:54.878 } 00:24:54.878 } 00:24:54.878 ] 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "subsystem": "sock", 00:24:54.878 "config": [ 00:24:54.878 { 00:24:54.878 "method": "sock_impl_set_options", 00:24:54.878 "params": { 00:24:54.878 "impl_name": "uring", 00:24:54.878 "recv_buf_size": 2097152, 00:24:54.878 "send_buf_size": 2097152, 00:24:54.878 "enable_recv_pipe": true, 00:24:54.878 "enable_quickack": false, 00:24:54.878 "enable_placement_id": 0, 00:24:54.878 "enable_zerocopy_send_server": false, 00:24:54.878 "enable_zerocopy_send_client": false, 00:24:54.878 "zerocopy_threshold": 0, 00:24:54.878 
"tls_version": 0, 00:24:54.878 "enable_ktls": false 00:24:54.878 } 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "method": "sock_impl_set_options", 00:24:54.878 "params": { 00:24:54.878 "impl_name": "posix", 00:24:54.878 "recv_buf_size": 2097152, 00:24:54.878 "send_buf_size": 2097152, 00:24:54.878 "enable_recv_pipe": true, 00:24:54.878 "enable_quickack": false, 00:24:54.878 "enable_placement_id": 0, 00:24:54.878 "enable_zerocopy_send_server": true, 00:24:54.878 "enable_zerocopy_send_client": false, 00:24:54.878 "zerocopy_threshold": 0, 00:24:54.878 "tls_version": 0, 00:24:54.878 "enable_ktls": false 00:24:54.878 } 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "method": "sock_impl_set_options", 00:24:54.878 "params": { 00:24:54.878 "impl_name": "ssl", 00:24:54.878 "recv_buf_size": 4096, 00:24:54.878 "send_buf_size": 4096, 00:24:54.878 "enable_recv_pipe": true, 00:24:54.878 "enable_quickack": false, 00:24:54.878 "enable_placement_id": 0, 00:24:54.878 "enable_zerocopy_send_server": true, 00:24:54.878 "enable_zerocopy_send_client": false, 00:24:54.878 "zerocopy_threshold": 0, 00:24:54.878 "tls_version": 0, 00:24:54.878 "enable_ktls": false 00:24:54.878 } 00:24:54.878 } 00:24:54.878 ] 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "subsystem": "vmd", 00:24:54.878 "config": [] 00:24:54.878 }, 00:24:54.878 { 00:24:54.878 "subsystem": "accel", 00:24:54.878 "config": [ 00:24:54.878 { 00:24:54.878 "method": "accel_set_options", 00:24:54.878 "params": { 00:24:54.878 "small_cache_size": 128, 00:24:54.878 "large_cache_size": 16, 00:24:54.878 "task_count": 2048, 00:24:54.878 "sequence_count": 2048, 00:24:54.878 "buf_count": 2048 00:24:54.878 } 00:24:54.878 } 00:24:54.878 ] 00:24:54.878 }, 00:24:54.878 { 00:24:54.879 "subsystem": "bdev", 00:24:54.879 "config": [ 00:24:54.879 { 00:24:54.879 "method": "bdev_set_options", 00:24:54.879 "params": { 00:24:54.879 "bdev_io_pool_size": 65535, 00:24:54.879 "bdev_io_cache_size": 256, 00:24:54.879 "bdev_auto_examine": true, 00:24:54.879 "iobuf_small_cache_size": 128, 00:24:54.879 "iobuf_large_cache_size": 16 00:24:54.879 } 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "method": "bdev_raid_set_options", 00:24:54.879 "params": { 00:24:54.879 "process_window_size_kb": 1024 00:24:54.879 } 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "method": "bdev_iscsi_set_options", 00:24:54.879 "params": { 00:24:54.879 "timeout_sec": 30 00:24:54.879 } 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "method": "bdev_nvme_set_options", 00:24:54.879 "params": { 00:24:54.879 "action_on_timeout": "none", 00:24:54.879 "timeout_us": 0, 00:24:54.879 "timeout_admin_us": 0, 00:24:54.879 "keep_alive_timeout_ms": 10000, 00:24:54.879 "arbitration_burst": 0, 00:24:54.879 "low_priority_weight": 0, 00:24:54.879 "medium_priority_weight": 0, 00:24:54.879 "high_priority_weight": 0, 00:24:54.879 "nvme_adminq_poll_period_us": 10000, 00:24:54.879 "nvme_ioq_poll_period_us": 0, 00:24:54.879 "io_queue_requests": 512, 00:24:54.879 "delay_cmd_submit": true, 00:24:54.879 "transport_retry_count": 4, 00:24:54.879 "bdev_retry_count": 3, 00:24:54.879 "transport_ack_timeout": 0, 00:24:54.879 "ctrlr_loss_timeout_sec": 0, 00:24:54.879 "reconnect_delay_sec": 0, 00:24:54.879 "fast_io_fail_timeout_sec": 0, 00:24:54.879 "disable_auto_failback": false, 00:24:54.879 "generate_uuids": false, 00:24:54.879 "transport_tos": 0, 00:24:54.879 "nvme_error_stat": false, 00:24:54.879 "rdma_srq_size": 0, 00:24:54.879 "io_path_stat": false, 00:24:54.879 "allow_accel_sequence": false, 00:24:54.879 "rdma_max_cq_size": 0, 00:24:54.879 
"rdma_cm_event_timeout_ms": 0, 00:24:54.879 "dhchap_digests": [ 00:24:54.879 "sha256", 00:24:54.879 "sha384", 00:24:54.879 "sha512" 00:24:54.879 ], 00:24:54.879 "dhchap_dhgroups": [ 00:24:54.879 "null", 00:24:54.879 "ffdhe2048", 00:24:54.879 "ffdhe3072", 00:24:54.879 "ffdhe4096", 00:24:54.879 "ffdhe6144", 00:24:54.879 "ffdhe8192" 00:24:54.879 ] 00:24:54.879 } 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "method": "bdev_nvme_attach_controller", 00:24:54.879 "params": { 00:24:54.879 "name": "nvme0", 00:24:54.879 "trtype": "TCP", 00:24:54.879 "adrfam": "IPv4", 00:24:54.879 "traddr": "127.0.0.1", 00:24:54.879 "trsvcid": "4420", 00:24:54.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.879 "prchk_reftag": false, 00:24:54.879 "prchk_guard": false, 00:24:54.879 "ctrlr_loss_timeout_sec": 0, 00:24:54.879 "reconnect_delay_sec": 0, 00:24:54.879 "fast_io_fail_timeout_sec": 0, 00:24:54.879 "psk": "key0", 00:24:54.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:54.879 "hdgst": false, 00:24:54.879 "ddgst": false 00:24:54.879 } 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "method": "bdev_nvme_set_hotplug", 00:24:54.879 "params": { 00:24:54.879 "period_us": 100000, 00:24:54.879 "enable": false 00:24:54.879 } 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "method": "bdev_wait_for_examine" 00:24:54.879 } 00:24:54.879 ] 00:24:54.879 }, 00:24:54.879 { 00:24:54.879 "subsystem": "nbd", 00:24:54.879 "config": [] 00:24:54.879 } 00:24:54.879 ] 00:24:54.879 }' 00:24:54.879 [2024-04-24 20:16:37.077520] Starting SPDK v24.05-pre git sha1 4907d1565 / DPDK 23.11.0 initialization... 00:24:54.879 [2024-04-24 20:16:37.078221] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81628 ] 00:24:55.137 [2024-04-24 20:16:37.215195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.137 [2024-04-24 20:16:37.325075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.396 [2024-04-24 20:16:37.490716] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.965 20:16:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:55.965 20:16:38 -- common/autotest_common.sh@850 -- # return 0 00:24:55.965 20:16:38 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:24:55.965 20:16:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.965 20:16:38 -- keyring/file.sh@120 -- # jq length 00:24:56.225 20:16:38 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:24:56.225 20:16:38 -- keyring/file.sh@121 -- # get_refcnt key0 00:24:56.225 20:16:38 -- keyring/common.sh@12 -- # get_key key0 00:24:56.225 20:16:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:56.225 20:16:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:56.225 20:16:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:56.225 20:16:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:56.483 20:16:38 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:56.484 20:16:38 -- keyring/file.sh@122 -- # get_refcnt key1 00:24:56.484 20:16:38 -- keyring/common.sh@12 -- # get_key key1 00:24:56.484 20:16:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:56.484 20:16:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:56.484 20:16:38 -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:56.484 20:16:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:56.743 20:16:38 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:24:56.743 20:16:38 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:24:56.743 20:16:38 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:24:56.743 20:16:38 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:56.743 20:16:38 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:24:56.743 20:16:38 -- keyring/file.sh@1 -- # cleanup 00:24:56.743 20:16:38 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.I9lS2efn6j /tmp/tmp.5VmcPVyM9G 00:24:56.743 20:16:38 -- keyring/file.sh@20 -- # killprocess 81628 00:24:56.743 20:16:38 -- common/autotest_common.sh@936 -- # '[' -z 81628 ']' 00:24:56.743 20:16:38 -- common/autotest_common.sh@940 -- # kill -0 81628 00:24:56.743 20:16:38 -- common/autotest_common.sh@941 -- # uname 00:24:56.743 20:16:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:56.743 20:16:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81628 00:24:57.002 killing process with pid 81628 00:24:57.002 Received shutdown signal, test time was about 1.000000 seconds 00:24:57.002 00:24:57.002 Latency(us) 00:24:57.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.002 =================================================================================================================== 00:24:57.002 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:57.002 20:16:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:57.002 20:16:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:57.002 20:16:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81628' 00:24:57.002 20:16:39 -- common/autotest_common.sh@955 -- # kill 81628 00:24:57.002 20:16:39 -- common/autotest_common.sh@960 -- # wait 81628 00:24:57.002 20:16:39 -- keyring/file.sh@21 -- # killprocess 81378 00:24:57.002 20:16:39 -- common/autotest_common.sh@936 -- # '[' -z 81378 ']' 00:24:57.002 20:16:39 -- common/autotest_common.sh@940 -- # kill -0 81378 00:24:57.002 20:16:39 -- common/autotest_common.sh@941 -- # uname 00:24:57.002 20:16:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.002 20:16:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81378 00:24:57.002 killing process with pid 81378 00:24:57.002 20:16:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:57.002 20:16:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:57.002 20:16:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81378' 00:24:57.002 20:16:39 -- common/autotest_common.sh@955 -- # kill 81378 00:24:57.002 [2024-04-24 20:16:39.251159] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:57.002 [2024-04-24 20:16:39.251196] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:57.002 20:16:39 -- common/autotest_common.sh@960 -- # wait 81378 00:24:57.570 00:24:57.570 real 0m14.029s 00:24:57.570 user 0m34.433s 00:24:57.570 sys 0m2.689s 00:24:57.570 20:16:39 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:24:57.570 20:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:57.570 ************************************ 00:24:57.570 END TEST keyring_file 00:24:57.570 ************************************ 00:24:57.570 20:16:39 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:24:57.570 20:16:39 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:24:57.570 20:16:39 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:24:57.570 20:16:39 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:24:57.570 20:16:39 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:24:57.570 20:16:39 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:24:57.570 20:16:39 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:24:57.570 20:16:39 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:24:57.570 20:16:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:57.570 20:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:57.570 20:16:39 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:24:57.570 20:16:39 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:24:57.570 20:16:39 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:24:57.570 20:16:39 -- common/autotest_common.sh@10 -- # set +x 00:24:59.477 INFO: APP EXITING 00:24:59.477 INFO: killing all VMs 00:24:59.477 INFO: killing vhost app 00:24:59.477 INFO: EXIT DONE 00:25:00.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.416 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:00.416 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:00.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.986 Cleaning 00:25:00.986 Removing: /var/run/dpdk/spdk0/config 00:25:01.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:01.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:01.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:01.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:01.246 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:01.246 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:01.246 Removing: /var/run/dpdk/spdk1/config 00:25:01.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:01.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:01.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:01.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:01.246 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:01.246 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:01.246 Removing: /var/run/dpdk/spdk2/config 00:25:01.246 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:01.246 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:01.246 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:01.246 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:01.246 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:01.246 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:01.246 Removing: /var/run/dpdk/spdk3/config 00:25:01.246 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:01.246 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:01.246 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:01.246 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:01.246 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:01.246 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:01.246 Removing: /var/run/dpdk/spdk4/config 00:25:01.246 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:01.246 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:01.246 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:01.246 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:01.246 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:01.246 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:01.246 Removing: /dev/shm/nvmf_trace.0 00:25:01.246 Removing: /dev/shm/spdk_tgt_trace.pid58427 00:25:01.246 Removing: /var/run/dpdk/spdk0 00:25:01.246 Removing: /var/run/dpdk/spdk1 00:25:01.246 Removing: /var/run/dpdk/spdk2 00:25:01.246 Removing: /var/run/dpdk/spdk3 00:25:01.246 Removing: /var/run/dpdk/spdk4 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58263 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58427 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58657 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58747 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58775 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58894 00:25:01.246 Removing: /var/run/dpdk/spdk_pid58911 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59040 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59231 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59375 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59446 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59527 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59624 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59705 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59747 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59782 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59855 00:25:01.246 Removing: /var/run/dpdk/spdk_pid59978 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60410 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60467 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60511 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60527 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60598 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60614 00:25:01.246 Removing: /var/run/dpdk/spdk_pid60680 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60690 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60745 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60762 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60807 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60824 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60959 00:25:01.505 Removing: /var/run/dpdk/spdk_pid60997 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61078 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61138 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61167 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61243 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61288 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61321 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61366 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61399 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61443 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61477 00:25:01.505 Removing: 
/var/run/dpdk/spdk_pid61521 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61556 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61600 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61639 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61677 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61717 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61755 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61794 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61833 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61877 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61920 00:25:01.505 Removing: /var/run/dpdk/spdk_pid61961 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62000 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62039 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62119 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62217 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62547 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62564 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62599 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62618 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62628 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62653 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62666 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62687 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62706 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62722 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62737 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62756 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62770 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62785 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62804 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62819 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62839 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62858 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62871 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62887 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62927 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62940 00:25:01.505 Removing: /var/run/dpdk/spdk_pid62970 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63043 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63081 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63085 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63124 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63133 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63141 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63193 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63201 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63239 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63252 00:25:01.505 Removing: /var/run/dpdk/spdk_pid63258 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63273 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63277 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63292 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63296 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63311 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63338 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63375 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63385 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63418 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63433 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63435 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63485 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63497 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63527 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63540 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63548 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63555 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63563 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63570 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63578 
00:25:01.765 Removing: /var/run/dpdk/spdk_pid63585 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63668 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63716 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63824 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63870 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63916 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63926 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63948 00:25:01.765 Removing: /var/run/dpdk/spdk_pid63968 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64014 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64025 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64111 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64127 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64165 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64236 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64292 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64313 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64425 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64477 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64519 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64779 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64893 00:25:01.765 Removing: /var/run/dpdk/spdk_pid64925 00:25:01.765 Removing: /var/run/dpdk/spdk_pid65259 00:25:01.765 Removing: /var/run/dpdk/spdk_pid65296 00:25:01.765 Removing: /var/run/dpdk/spdk_pid65604 00:25:01.765 Removing: /var/run/dpdk/spdk_pid66012 00:25:01.765 Removing: /var/run/dpdk/spdk_pid66274 00:25:01.765 Removing: /var/run/dpdk/spdk_pid67055 00:25:01.765 Removing: /var/run/dpdk/spdk_pid67880 00:25:01.765 Removing: /var/run/dpdk/spdk_pid68002 00:25:01.765 Removing: /var/run/dpdk/spdk_pid68064 00:25:01.765 Removing: /var/run/dpdk/spdk_pid69330 00:25:01.765 Removing: /var/run/dpdk/spdk_pid69551 00:25:01.765 Removing: /var/run/dpdk/spdk_pid69849 00:25:01.765 Removing: /var/run/dpdk/spdk_pid69958 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70097 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70119 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70152 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70174 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70266 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70397 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70547 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70622 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70804 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70887 00:25:01.765 Removing: /var/run/dpdk/spdk_pid70980 00:25:02.025 Removing: /var/run/dpdk/spdk_pid71289 00:25:02.025 Removing: /var/run/dpdk/spdk_pid71673 00:25:02.025 Removing: /var/run/dpdk/spdk_pid71675 00:25:02.025 Removing: /var/run/dpdk/spdk_pid71954 00:25:02.025 Removing: /var/run/dpdk/spdk_pid71968 00:25:02.025 Removing: /var/run/dpdk/spdk_pid71987 00:25:02.025 Removing: /var/run/dpdk/spdk_pid72018 00:25:02.025 Removing: /var/run/dpdk/spdk_pid72023 00:25:02.025 Removing: /var/run/dpdk/spdk_pid72317 00:25:02.025 Removing: /var/run/dpdk/spdk_pid72361 00:25:02.025 Removing: /var/run/dpdk/spdk_pid72636 00:25:02.025 Removing: /var/run/dpdk/spdk_pid72832 00:25:02.025 Removing: /var/run/dpdk/spdk_pid73213 00:25:02.025 Removing: /var/run/dpdk/spdk_pid73702 00:25:02.025 Removing: /var/run/dpdk/spdk_pid74301 00:25:02.025 Removing: /var/run/dpdk/spdk_pid74303 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76223 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76284 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76343 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76399 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76524 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76583 00:25:02.025 Removing: 
/var/run/dpdk/spdk_pid76639 00:25:02.025 Removing: /var/run/dpdk/spdk_pid76699 00:25:02.025 Removing: /var/run/dpdk/spdk_pid77027 00:25:02.025 Removing: /var/run/dpdk/spdk_pid78195 00:25:02.025 Removing: /var/run/dpdk/spdk_pid78341 00:25:02.025 Removing: /var/run/dpdk/spdk_pid78583 00:25:02.025 Removing: /var/run/dpdk/spdk_pid79149 00:25:02.025 Removing: /var/run/dpdk/spdk_pid79312 00:25:02.025 Removing: /var/run/dpdk/spdk_pid79478 00:25:02.025 Removing: /var/run/dpdk/spdk_pid79575 00:25:02.025 Removing: /var/run/dpdk/spdk_pid79752 00:25:02.025 Removing: /var/run/dpdk/spdk_pid79866 00:25:02.025 Removing: /var/run/dpdk/spdk_pid80539 00:25:02.025 Removing: /var/run/dpdk/spdk_pid80576 00:25:02.025 Removing: /var/run/dpdk/spdk_pid80610 00:25:02.025 Removing: /var/run/dpdk/spdk_pid80872 00:25:02.025 Removing: /var/run/dpdk/spdk_pid80902 00:25:02.025 Removing: /var/run/dpdk/spdk_pid80937 00:25:02.025 Removing: /var/run/dpdk/spdk_pid81378 00:25:02.025 Removing: /var/run/dpdk/spdk_pid81395 00:25:02.025 Removing: /var/run/dpdk/spdk_pid81628 00:25:02.025 Clean 00:25:02.285 20:16:44 -- common/autotest_common.sh@1437 -- # return 0 00:25:02.285 20:16:44 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:25:02.285 20:16:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:02.285 20:16:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.285 20:16:44 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:25:02.285 20:16:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:02.285 20:16:44 -- common/autotest_common.sh@10 -- # set +x 00:25:02.285 20:16:44 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:02.285 20:16:44 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:02.285 20:16:44 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:02.285 20:16:44 -- spdk/autotest.sh@389 -- # hash lcov 00:25:02.285 20:16:44 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:02.285 20:16:44 -- spdk/autotest.sh@391 -- # hostname 00:25:02.285 20:16:44 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:02.544 geninfo: WARNING: invalid characters removed from testname! 
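The coverage post-processing in this phase captures the test counters, merges them with the baseline, and strips external code. Condensed into a sketch (paths abbreviated and the full --rc/--no-external option sets omitted), the sequence driven by autotest.sh is roughly:

    # Sketch of the lcov flow shown in the trace; real invocations use absolute
    # paths under /home/vagrant/spdk_repo/spdk/../output and extra --rc flags.
    lcov -q -c -d "$spdk_dir" -o cov_test.info                    # capture test counters
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # merge with baseline
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop bundled DPDK
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info        # drop system headers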
00:25:29.129 20:17:07 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:29.129 20:17:11 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:31.661 20:17:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:33.566 20:17:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:36.113 20:17:18 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:38.650 20:17:20 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:41.184 20:17:22 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:41.184 20:17:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:41.184 20:17:22 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:41.184 20:17:22 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.184 20:17:22 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.185 20:17:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.185 20:17:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.185 20:17:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.185 20:17:22 -- paths/export.sh@5 -- $ export PATH 00:25:41.185 20:17:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.185 20:17:22 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:41.185 20:17:22 -- common/autobuild_common.sh@435 -- $ date +%s 00:25:41.185 20:17:22 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713989842.XXXXXX 00:25:41.185 20:17:22 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713989842.kwbjRD 00:25:41.185 20:17:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:25:41.185 20:17:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:25:41.185 20:17:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:25:41.185 20:17:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:41.185 20:17:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:41.185 20:17:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:25:41.185 20:17:23 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:25:41.185 20:17:23 -- common/autotest_common.sh@10 -- $ set +x 00:25:41.185 20:17:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:25:41.185 20:17:23 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:25:41.185 20:17:23 -- pm/common@17 -- $ local monitor 00:25:41.185 20:17:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:41.185 20:17:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=83367 00:25:41.185 20:17:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:41.185 20:17:23 -- pm/common@21 -- $ date +%s 00:25:41.185 20:17:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=83369 00:25:41.185 20:17:23 -- pm/common@26 -- $ sleep 1 00:25:41.185 20:17:23 -- pm/common@21 -- $ date +%s 00:25:41.185 20:17:23 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713989843 00:25:41.185 20:17:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713989843 00:25:41.185 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713989843_collect-vmstat.pm.log 00:25:41.185 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713989843_collect-cpu-load.pm.log 00:25:42.123 20:17:24 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:25:42.123 20:17:24 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:25:42.123 20:17:24 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:25:42.123 20:17:24 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:42.123 20:17:24 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:42.123 20:17:24 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:42.123 20:17:24 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:42.123 20:17:24 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:42.123 20:17:24 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:42.123 20:17:24 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:42.123 20:17:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:42.123 20:17:24 -- pm/common@30 -- $ signal_monitor_resources TERM 00:25:42.123 20:17:24 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:25:42.123 20:17:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:42.123 20:17:24 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:42.123 20:17:24 -- pm/common@45 -- $ pid=83375 00:25:42.123 20:17:24 -- pm/common@52 -- $ sudo kill -TERM 83375 00:25:42.123 20:17:24 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:42.123 20:17:24 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:42.123 20:17:24 -- pm/common@45 -- $ pid=83374 00:25:42.123 20:17:24 -- pm/common@52 -- $ sudo kill -TERM 83374 00:25:42.123 + [[ -n 5314 ]] 00:25:42.123 + sudo kill 5314 00:25:42.132 [Pipeline] } 00:25:42.150 [Pipeline] // timeout 00:25:42.156 [Pipeline] } 00:25:42.170 [Pipeline] // stage 00:25:42.175 [Pipeline] } 00:25:42.190 [Pipeline] // catchError 00:25:42.214 [Pipeline] stage 00:25:42.216 [Pipeline] { (Stop VM) 00:25:42.226 [Pipeline] sh 00:25:42.500 + vagrant halt 00:25:45.073 ==> default: Halting domain... 00:25:53.206 [Pipeline] sh 00:25:53.489 + vagrant destroy -f 00:25:56.821 ==> default: Removing domain... 
00:25:56.834 [Pipeline] sh 00:25:57.117 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:57.128 [Pipeline] } 00:25:57.149 [Pipeline] // stage 00:25:57.155 [Pipeline] } 00:25:57.174 [Pipeline] // dir 00:25:57.179 [Pipeline] } 00:25:57.198 [Pipeline] // wrap 00:25:57.205 [Pipeline] } 00:25:57.220 [Pipeline] // catchError 00:25:57.229 [Pipeline] stage 00:25:57.231 [Pipeline] { (Epilogue) 00:25:57.246 [Pipeline] sh 00:25:57.530 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:04.114 [Pipeline] catchError 00:26:04.116 [Pipeline] { 00:26:04.198 [Pipeline] sh 00:26:04.481 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:04.481 Artifacts sizes are good 00:26:04.491 [Pipeline] } 00:26:04.508 [Pipeline] // catchError 00:26:04.518 [Pipeline] archiveArtifacts 00:26:04.525 Archiving artifacts 00:26:04.726 [Pipeline] cleanWs 00:26:04.740 [WS-CLEANUP] Deleting project workspace... 00:26:04.740 [WS-CLEANUP] Deferred wipeout is used... 00:26:04.746 [WS-CLEANUP] done 00:26:04.748 [Pipeline] } 00:26:04.763 [Pipeline] // stage 00:26:04.768 [Pipeline] } 00:26:04.786 [Pipeline] // node 00:26:04.792 [Pipeline] End of Pipeline 00:26:04.834 Finished: SUCCESS